Default erasure code profile and sustaining loss of one host containing 4 OSDs

Hello

 

I am currently trying to find out whether Ceph can sustain the loss of a full host (containing 4 OSDs) in a default erasure-coded pool (k=2, m=1). We currently have a production EC pool with the default erasure profile, but would like to make sure the data on this pool remains accessible even after one of our hosts fails. Since we have a very small cluster (6 hosts, 4 OSDs per host), I created a custom CRUSH rule to make sure the 3 chunks are spread over 3 hosts, screenshot here: https://gyazo.com/1a3ddd6895df0d5e0e425774d2bcb257 .

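In case it helps, the rule is meant to do the equivalent of the following CLI commands; the profile and rule names below are just examples, not necessarily the exact ones in use:

    # EC profile with 2 data + 1 coding chunk, one chunk per host
    # (on pre-Luminous releases the option is spelled ruleset-failure-domain)
    ceph osd erasure-code-profile set ec-2-1-host k=2 m=1 crush-failure-domain=host

    # CRUSH rule derived from that profile; the generated rule ends up doing
    # "step chooseleaf indep 0 type host", i.e. one chunk per host
    ceph osd crush rule create-erasure ec-2-1-host-rule ec-2-1-host

The intent is that each PG places its 3 chunks on 3 different hosts, so losing any single host should only cost each PG one chunk.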
 

Unfortunately, taking one node offline results in reduced data availability and incomplete PGs, as shown here: https://gyazo.com/db56d5a52c9de2fd71bf9ae8eb03dbbc .

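In case the screenshot is hard to read: this is the state the cluster reports after the host goes down, which I assume can also be inspected with something like:

    ceph health detail           # lists the PGs reported as inactive / incomplete
    ceph pg dump_stuck inactive  # shows the stuck PGs together with their up/acting OSD sets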
 

My question summed up: is it possible to sustain the loss of a host containing 4 OSDs with a k=2, m=1 erasure profile and a CRUSH rule that spreads the chunks over at least 3 hosts? If so, what am I doing wrong? I realize the documentation states that m equals the number of OSDs that can be lost, but assuming a balanced CRUSH map is used, I fail to see why that should prevent this: with the three chunks on three separate hosts, losing one host should only remove one chunk per PG, which m=1 ought to tolerate.

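For completeness, these are the settings I would double-check in this situation (the pool name below is a placeholder):

    ceph osd erasure-code-profile get default   # confirm k=2, m=1 and the configured failure domain
    ceph osd pool ls detail                     # size, min_size and crush rule per pool
    ceph osd pool get <ecpool> min_size         # a PG stops serving I/O once fewer than min_size shards are up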
 

Many thanks in advance.

 

Kind regards

Ziggy Maes
DevOps Engineer


www.be-mobile.com

 

 

