Re: Best practices for extending a ceph cluster with minimal client impact and data movement

Hi Wido,

only to clarify things: I checked some osd daemons with the following command:

$ sudo ceph daemon osd.42 config show | grep backfills
    "osd_max_backfills": "1",

$ sudo ceph daemon osd.42 config show | grep recovery_threads
    "osd_recovery_threads": "1",

So it seems we have already throttled backfilling and recovery. Am I right?

We have not set anything related to backfills or recovery in ceph.conf,
so I'm wondering whether these values are the defaults.
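
If nothing in ceph.conf or an injectargs call overrides them, the running
values should simply be the built-in defaults. One way to cross-check (a
sketch; it assumes the ceph CLI supports --show-config, and that
osd_recovery_max_active is the "max recovery" throttle meant below):

$ sudo ceph daemon osd.42 config show | grep -e osd_max_backfills -e osd_recovery_max_active
$ ceph --show-config | grep -e osd_max_backfills -e osd_recovery_max_active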

Best,
Martin

On Wed, Aug 10, 2016 at 9:17 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> On 9 August 2016 at 17:44, Martin Palma <martin@xxxxxxxx> wrote:
>>
>>
>> Hi Wido,
>>
>> thanks for your advice.
>>
>
> Just keep in mind, you should update the CRUSHMap in one big bang. The cluster will be calculating and peering for 1 or 2 min and afterwards you should see all PGs active+X.
>
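> Whether peering has settled can be followed with the usual status
> commands, for example (a sketch):
>
> $ ceph -s        # overall health, PG states, recovery/backfill rates
> $ ceph pg stat   # compact summary of PG states
>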
> Then the waiting game starts: get coffee, get some sleep, and wait for it to finish.
>
> By throttling recovery you prevent this from becoming slow for the clients.
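>
> If the throttles ever need adjusting while the rebalance is running, they
> can usually be changed at runtime without restarting the OSDs, e.g. (a
> sketch; osd_recovery_max_active is assumed to be the "max recovery"
> setting meant here):
>
> $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'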
>
> Wido
>
>> Best,
>> Martin
>>
>> On Tue, Aug 9, 2016 at 10:05 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> >
>> >> On 8 August 2016 at 16:45, Martin Palma <martin@xxxxxxxx> wrote:
>> >>
>> >>
>> >> Hi all,
>> >>
>> >> we are in the process of expanding our cluster and I would like to
>> >> know if there are some best practices in doing so.
>> >>
>> >> Our current cluster is composed as follows:
>> >> - 195 OSDs (14 Storage Nodes)
>> >> - 3 Monitors
>> >> - Total capacity 620 TB
>> >> - Used 360 TB
>> >>
>> >> We will expand the cluster by another 14 Storage Nodes and 2 Monitor
>> >> nodes, so we are doubling the current deployment:
>> >>
>> >> - OSDs: 195 --> 390
>> >> - Total capacity: 620 TB --> 1250 TB
>> >>
>> >> During the expansion we would like to minimize the client impact and
>> >> data movement. Any suggestions?
>> >>
>> >
>> > There are a few routes you can take; I would suggest that you:
>> >
>> > - set max backfills to 1
>> > - set max recovery to 1
>> >
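>> > For example, both throttles could be pinned in ceph.conf on the OSD nodes
>> > (a sketch; "max recovery" is assumed to mean osd_recovery_max_active):
>> >
>> > [osd]
>> > osd max backfills = 1
>> > osd recovery max active = 1
>> >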
>> > Now, add the OSDs to the cluster, but NOT to the CRUSHMap.
>> >
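>> > One way to keep freshly started OSDs from adding themselves to the CRUSH
>> > map is the osd_crush_update_on_start option (a sketch; set it on the new
>> > nodes before bringing their OSDs up):
>> >
>> > [osd]
>> > # keep new OSDs out of the CRUSH map until it is injected by hand
>> > osd crush update on start = false
>> >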
>> > When all the OSDs are online, inject a new CRUSHMap where you add the new OSDs to the data placement.
>> >
>> > $ ceph osd setcrushmap -i <new crushmap>
>> >
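>> > The <new crushmap> is typically produced by extracting and decompiling the
>> > current map, editing in the new hosts and OSDs, and recompiling it (a
>> > sketch; the file names are only illustrative):
>> >
>> > $ ceph osd getcrushmap -o crushmap.bin      # dump the current map
>> > $ crushtool -d crushmap.bin -o crushmap.txt # decompile to editable text
>> > # edit crushmap.txt: add the new hosts/OSDs with their weights
>> > $ crushtool -c crushmap.txt -o crushmap.new # recompile the edited map
>> >
>> > The resulting file is then what gets passed to the setcrushmap command above.
>> >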
>> > The OSDs will now start to migrate data, but this is throttled by the max recovery and backfill settings.
>> >
>> > Wido
>> >
>> >> Best,
>> >> Martin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


