Re: best practices for expanding hammer cluster

This was covered on the mailing list recently; I believe the thread below will answer all of your questions.

https://www.spinics.net/lists/ceph-users/msg37252.html


On Tue, Jul 18, 2017, 9:07 AM Laszlo Budai <laszlo@xxxxxxxxxxxxxxxx> wrote:
Dear all,

We are planning to add new hosts to our existing Hammer clusters, and I'm looking for best-practice recommendations.

Currently we have two clusters, each with 72 OSDs across 6 nodes. We want to add 3 more nodes (36 OSDs) to each cluster, and we have some questions about the best way to do it. The two clusters currently have different CRUSH maps.

Cluster 1
The CRUSH map contains only OSDs, hosts, and the root bucket; the failure domain is host.
Our desired final state is:
OSD - host - chassis - root, where each chassis holds 3 hosts, each host holds 12 OSDs, and the failure domain is chassis.
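For context, the rough sequence we assume would be needed is the following (the chassis/host names are just placeholders, and the commands are our reading of the standard CLI, so please correct us if this is wrong):

ceph osd crush add-bucket chassis1 chassis        # create an empty chassis bucket
ceph osd crush move chassis1 root=default         # place it under the root
ceph osd crush move host1 chassis=chassis1        # move an existing host under it
# then change the rule's failure domain by editing the decompiled map:
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt               # change "step chooseleaf firstn 0 type host" to "type chassis"
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new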

What would be the recommended way to achieve this without downtime for client operations?
I have read about the possibility of throttling down recovery/backfill using:
osd max backfills = 1
osd recovery max active = 1
osd recovery max single start = 1
osd recovery op priority = 1
osd recovery threads = 1
osd backfill scan max = 16
osd backfill scan min = 4
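
If it matters, our assumption is that most of these can be applied at runtime with injectargs before the new hosts are added (the recovery thread count presumably needs ceph.conf and an OSD restart), e.g.:

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-recovery-op-priority 1 --osd-backfill-scan-min 4 --osd-backfill-scan-max 16'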

We also wonder about the worst-case scenario where all the replicas belonging to one PG have to be migrated to new locations according to the new CRUSH map. How will Ceph behave in such a situation?


Cluster 2
The CRUSH map already contains chassis buckets. Currently we have 3 chassis (c1, c2, c3) and 6 hosts:
- x1, x2 in chassis c1
- y1, y2 in chassis c2
- x3, y3 in chassis c3

We are adding hosts z1, z2, z3 and our desired CRUSH map would look like this:
- x1, x2, x3 in c1
- y1, y2, y3 in c2
- z1, z2, z3 in c3
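
Our assumption is that the host moves themselves would be done with crush move, for example (again, please correct us if there is a better way):

ceph osd crush move x3 chassis=c1        # x3 currently sits in c3
ceph osd crush move y3 chassis=c2        # y3 currently sits in c3
ceph osd crush move z1 chassis=c3        # once z1 exists as a host bucket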

Again, what would be the recommended way to achieve this while the clients are still accessing the data?

Is it safe to add multiple OSDs at a time, or should we add them one by one?
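
One approach we have seen mentioned (we are not sure whether it is still the recommended one on Hammer) is to add the new OSDs with zero CRUSH weight and then raise the weight gradually, waiting for the cluster to settle between steps; the OSD id and weights below are only examples:

# in ceph.conf on the new nodes, before the OSDs are created
osd crush initial weight = 0

# then raise the weight of each new OSD in small steps
ceph osd crush reweight osd.72 0.5
ceph osd crush reweight osd.72 1.0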

Thank you in advance for any suggestions or recommendations.

Kind regards,
Laszlo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
