Crushmap rule for multi-datacenter erasure coding

Hi,

We have a 3-site Ceph cluster and would like to create a 4+2 EC pool with 2 chunks per datacenter, to maximise resilience if one datacenter goes down. I have not found a way to create an EC profile with this 2-level allocation strategy.

I created an EC profile with failure domain = datacenter, but it doesn't work: my guess is that Ceph wants to ensure there are always 5 OSDs up (so that the pool remains R/W), whereas with failure domain = datacenter the guarantee is only 4.

My idea is to use a 2-step allocation with failure domain = host to achieve the desired layout, with something like the following in the CRUSH rule (a fuller sketch of what I have in mind follows below):

step choose indep 3 type datacenter
step chooseleaf indep x type host
step emit

Is this the right approach? If so, what should 'x' be? Would 0 work?
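For completeness, here is roughly what I have in mind for the full setup. This is only a sketch, not something I have tested: the profile name, rule name, rule id and the root 'default' are placeholders, and 'x' is exactly the value I am unsure about.

# EC profile: k=4, m=2; failure domain left at host, since the custom rule
# is meant to handle the per-datacenter placement
ceph osd erasure-code-profile set ec42-multidc k=4 m=2 crush-failure-domain=host

# Candidate rule in the decompiled CRUSH map
rule ec42_multidc_rule {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # pick the 3 datacenters, then x hosts (one OSD each) in every datacenter
        step choose indep 3 type datacenter
        step chooseleaf indep x type host
        step emit
}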

From what I have seen, there is no way to create such a rule with the 'ceph osd crush' commands: I have to download the current CRUSH map, decompile and edit it, then upload the modified version. Am I right?
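For reference, the procedure I had in mind is the usual get/decompile/edit/recompile/set cycle (the file names below are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt to add the rule sketched above
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin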

Thanks in advance for your help or suggestions. Best regards,

Michel



