Erasure Encoding Chunks > Number of Hosts

Hi All,

 

Would anybody have an idea of a) whether it's possible, and b) whether it's a good idea, to have more EC chunks than the total number of hosts?

 

For instance, if I wanted to use k=6 m=2 across only 4 hosts, and I wanted to be able to withstand 1 host failure and 1 disk failure (on any host), would a CRUSH map rule be able to achieve that?

 

I.e. it would first split the data evenly across hosts, and then across the OSDs within each host?

 

If I set the erasure profile's failure domain to OSD and have the CRUSH rule chooseleaf over hosts, will this effectively achieve what I have described?
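To be concrete, something like the following is what I had in mind, untested, and with the profile name, rule name, and rule id as placeholders. The rule picks 4 hosts and then 2 OSDs on each, so the 8 chunks land 2 per host:

```
# Hypothetical EC profile; k/m as above, failure domain OSD.
ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=osd

# Hypothetical CRUSH rule (added via a decompiled/recompiled CRUSH map):
rule ecpool_hosts_osds {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step take default
    step choose indep 4 type host
    step chooseleaf indep 2 type osd
    step emit
}
```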

 

I would be interested in doing this for two reasons: one is better capacity efficiency than k=2 m=2, and the other is that when I expand this cluster to 8 hosts in the near future I won't have to worry about re-creating the pool. I fully understand that I would forfeit the ability to withstand losing 2 hosts, but I would think that to be quite an unlikely event, having only 2 hosts to start with.
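On the capacity point, the usable fraction of raw storage for a k+m pool is k/(k+m), so a quick sanity check of the two profiles I'm comparing:

```python
def usable_fraction(k: int, m: int) -> float:
    """Fraction of raw capacity that is usable data in a k+m EC pool."""
    return k / (k + m)

# k=6 m=2 stores 6 data chunks per 8 total: 75% usable.
print(usable_fraction(6, 2))  # 0.75
# k=2 m=2 stores 2 data chunks per 4 total: 50% usable.
print(usable_fraction(2, 2))  # 0.5
```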

 

Many thanks,

Nick


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
