Caspar,
Thank you for your reply. In all honesty, I am still not clear on what value to use for min_size. From what I understand, it should be set to k+m for erasure-coded pools, which is what it is set to by default.
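For reference, this is roughly how I have been checking the values involved; "ecpool" below is just a placeholder for our actual pool name:

    # show the erasure-code profile the pool was created with (default profile: k=2, m=1)
    ceph osd erasure-code-profile get default

    # show the pool's current min_size
    ceph osd pool get ecpool min_size

    # adjust it if a different value turns out to be correct, e.g.
    ceph osd pool set ecpool min_size 2

Please correct me if that is not the right way to inspect this.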
Additionally, could you elaborate on why m=2 would be able to sustain a node failure? As stated, we have 6 hosts with 4 OSDs each (24 OSDs in total). What would m=2 achieve that m=1 would not?
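Just to make sure I understand the suggestion correctly: if m=2 is indeed what we need, I assume we would create a new profile and a new pool along these lines (the profile and pool names below are placeholders), since as far as I know the profile of an existing pool cannot be changed:

    # new profile with 2 data chunks and 2 coding chunks, one chunk per host
    ceph osd erasure-code-profile set ec-k2-m2 k=2 m=2 crush-failure-domain=host

    # new pool using that profile (the pg numbers are just an example)
    ceph osd pool create ecpool-k2m2 128 128 erasure ec-k2-m2

and then migrate the data from the current pool into the new one.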
Kind regards
Ziggy Maes
DevOps Engineer
CELL +32 478 644 354
SKYPE Ziggy.Maes
www.be-mobile.com
From: Caspar Smit <casparsmit@xxxxxxxxxxx>
Date: Friday, 20 July 2018 at 13:36
To: Ziggy Maes <ziggy.maes@xxxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs
2018-07-20 13:11 GMT+02:00 Ziggy Maes <ziggy.maes@xxxxxxxxxxxxx>:
Hello
I am currently trying to find out if Ceph can sustain the loss of a full host (containing 4 OSDs) in a default erasure-coded pool (k=2, m=1). We currently have a production EC pool with the default erasure profile, but would like to make sure the data on this pool remains accessible even after one of our hosts fails. Since we have a very small cluster (6 hosts, 4 OSDs per host), I created a custom CRUSH rule to make sure the 3 chunks are spread over 3 hosts, screenshot here:
https://gyazo.com/1a3ddd6895df0d5e0e425774d2bcb257 .
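For anyone who cannot open the screenshot: the rule is, as far as I can reproduce it here, of roughly the following shape (the id and name are placeholders; the exact rule is in the screenshot), as it appears in the CRUSH map decompiled with crushtool -d:

    rule ecpool_host_spread {
            id 1
            type erasure
            min_size 3
            max_size 3
            step take default
            # one independent shard per host, so the 3 chunks end up on 3 different hosts
            step chooseleaf indep 0 type host
            step emit
    }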
Unfortunately, taking one node offline results in reduced data availability and incomplete PGs, as shown here:
https://gyazo.com/db56d5a52c9de2fd71bf9ae8eb03dbbc .
My question summed up: is it possible to sustain the loss of a host containing 4 OSDs with a k=2, m=1 erasure profile and a CRUSH map that spreads data over at least 3 hosts? If so, what am I doing wrong? I realize the documentation states that m equals the number of OSDs that can be lost, but assuming a balanced CRUSH map is used, I fail to see why this would be required.
Many thanks in advance.
Kind regards
Ziggy Maes
DevOps Engineer
www.be-mobile.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com