Re: best practices for expanding hammer cluster

On 19/07/17 15:14, Laszlo Budai wrote:
> Hi David,
> 
> Thank you for that reference about CRUSH. It's a nice one.
> There I could read about expanding the cluster, but in one of our cases we want to do more: we want to move from a host failure domain to a chassis failure domain. Our concern is: how will Ceph behave for those PGs where all three replicas are currently in the same chassis? According to the new CRUSH map, two of those replicas would be in the wrong place.
> 
> Kind regards,
> Laszlo

Changing crush rules so that PGs get remapped works exactly the same way as changes in crush weights causing remapped data. The PGs will be remapped according to the new crushmap/rules, and recovery operations will then copy them over to the new OSDs as usual. Even if a PG is entirely remapped, the OSDs that originally hosted it remain the acting set and continue to serve I/O and replicate data until the copies on the new OSDs are ready to take over - Ceph won't complain that the acting set doesn't comply with the new crush rules. I have done, for instance, a crush rule change that remapped an entire pool - switching the cephfs metadata pool from an HDD root rule to an SSD root rule, so every single PG moved to a completely different set of OSDs - and it all continued to work fine while recovery took place.
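For illustration only, here is a minimal sketch of the kind of commands involved on a Hammer-era cluster. The bucket, host, pool, and rule names (chassis1, node1, data, chassis-rule) are made up, and on Luminous and later the pool property is crush_rule rather than crush_ruleset:

    # create a chassis bucket and place it under the existing root (hypothetical names)
    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush move chassis1 root=default
    # move a host bucket under its chassis
    ceph osd crush move node1 chassis=chassis1
    # create a replicated rule that separates replicas across chassis
    ceph osd crush rule create-simple chassis-rule default chassis
    # look up the new rule's ruleset id, then point the pool at it
    ceph osd crush rule dump chassis-rule
    ceph osd pool set data crush_ruleset <ruleset-id>
    # watch the remapping and recovery progress
    ceph -s

Once the pool references the new rule, ceph -s will show PGs as remapped and backfilling, and client I/O keeps being served throughout, as described above.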

Rich

