Hello Caspar,
That makes a great deal of sense, thank you for elaborating. Am I correct to assume that if we were to use a k=2, m=2 profile, it would be identical to a replicated pool (since there would be an equal number of data and parity chunks)? Furthermore, how should the proper erasure profile be determined? Should we strive for as high a data-chunk value (k) as possible and a low parity/coding value (m)?
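For reference, a minimal sketch of the raw-space arithmetic behind the question (plain Python, not anything Ceph-specific; the profiles listed are illustrative only):

```python
# Raw-space overhead: an erasure-coded pool stores (k + m) / k times the
# logical data, while a replicated pool stores `size` full copies.
def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k

for k, m in [(2, 1), (2, 2), (4, 2)]:
    print(f"k={k}, m={m}: {ec_overhead(k, m):.2f}x raw space, {k + m} shards")

print("replicated size=2: 2.00x raw space, 2 whole copies")
# Note: k=2, m=2 matches a size-2 replicated pool on raw overhead, but the
# object is still cut into 4 shards rather than kept as 2 full copies, so
# it is not operationally identical to replication.
```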
From: Caspar Smit <casparsmit@xxxxxxxxxxx>
Date: Friday, 20 July 2018 at 14:15
To: Ziggy Maes <ziggy.maes@xxxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Default erasure code profile and sustaining loss of one host containing 4 OSDs
Ziggy,
For EC pools: min_size = k+1
So in your case (m=1) -> min_size is 3, which is the same as the number of shards. So if ANY shard goes down, I/O freezes.
If you choose m=2, min_size will still be 3, but you now have 4 shards (k+m = 4), so you can lose a shard and still remain available.
Of course, a failure domain of 'host' is required to do this, but since you have 6 hosts that would be OK.
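To make that concrete, a minimal sketch of the availability rule described above (plain Python arithmetic, not a Ceph call):

```python
# EC availability rule from above: min_size = k + 1, total shards = k + m,
# and the pool accepts I/O only while at least min_size shards are up.
def pool_writable(k: int, m: int, failed_shards: int) -> bool:
    min_size = k + 1                      # Ceph default for EC pools
    available = (k + m) - failed_shards   # shards still online
    return available >= min_size

print(pool_writable(2, 1, failed_shards=1))  # False: min_size equals shard count, I/O freezes
print(pool_writable(2, 2, failed_shards=1))  # True: 3 of 4 shards up, still meets min_size
```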
Kind regards,
Caspar Smit
Systems Engineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend
t: (+31) 299 410 414
e: casparsmit@xxxxxxxxxxx
w: www.supernas.eu
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com