Re: Tip for erasure code profile?

On 03/05/2019 23:56, Maged Mokhtar wrote:


On 03/05/2019 17:45, Robert Sander wrote:
Hi,

I would be glad if anybody could give me a tip for an erasure code
profile and an associated crush ruleset.

The cluster spans 2 rooms with each room containing 6 hosts and each
host has 12 to 16 OSDs.

The failure domain would be the room level, i.e. data should survive if
one of the rooms has a power loss.

Is that even possible with erasure coding?
I am only coming up with profiles where m=6, but that seems to be a
little overkill.

Regards


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


I think if you set k = m it should work: k=2,m=2; k=3,m=3; etc. With the shards split evenly across the 2 rooms, each room holds (k+m)/2 = k shards, so the surviving room still has the k shards needed to reconstruct the data.

For k=3, m=3 the CRUSH rule could be something like:
type erasure
min_size 6
max_size 6
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choose indep 2 type room
step chooseleaf indep 3 type host
step emit
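A quick sanity check of the k = m idea (plain arithmetic, not Ceph code; the numbers are taken from the rule above):

```python
# The rule picks 2 rooms ("choose indep 2 type room") and 3 hosts in each
# ("chooseleaf indep 3 type host"), one shard per host.
k, m = 3, 3
rooms = 2
shards_per_room = (k + m) // rooms   # 3 shards land in each room

# If one room loses power, the shards in the other room must still
# allow reads, i.e. at least k must survive.
surviving = (k + m) - shards_per_room
assert surviving >= k  # exactly k: readable, but with zero spare redundancy
print(shards_per_room, surviving)  # -> 3 3
```

Note the survivors equal k exactly, which is why the pool can only keep serving I/O if min_size is dropped to k, as discussed below.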

/Maged


For the above to keep working while a room is down, you would have to set the pool min_size to k, which is not desirable. To keep the default min_size = k+1, you need:
m = k + 2
so k=2,m=4 and k=3,m=5 are viable options.
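The m = k + 2 requirement follows from simple counting: a room outage removes (k+m)/2 shards, and the survivors must still number at least min_size = k+1, so (k+m)/2 >= k+1, i.e. m >= k+2. A small sketch (plain Python, not Ceph code):

```python
def smallest_m(k, rooms=2, spare=1):
    """Smallest m such that losing one of `rooms` rooms still leaves
    min_size = k + spare shards (even split of shards across rooms)."""
    m = k  # anything below k cannot even survive a room loss with k shards
    while ((k + m) - (k + m) // rooms < k + spare   # survivors too few
           or (k + m) % rooms != 0):                # shards must split evenly
        m += 1
    return m

print([(k, smallest_m(k)) for k in (2, 3, 4)])  # -> [(2, 4), (3, 5), (4, 6)]
```

This reproduces the k=2,m=4 and k=3,m=5 combinations above.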

Example for k=3 m=5
pool size=8, min_size=4

rule:
type erasure
min_size 8
max_size 8
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choose indep 2 type room
step chooseleaf indep 4 type host
step emit
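Putting this in place would look roughly like the following (a sketch: the names ec-2room, ec-2room-rule, and ecpool are made up, and the two-level room/host rule has to be added by hand-editing the CRUSH map, since an erasure-code-profile only accepts a single crush-failure-domain):

```shell
# Assumed names (ec-2room, ecpool) - adjust for your cluster.
ceph osd erasure-code-profile set ec-2room k=3 m=5 crush-failure-domain=host

# The room-then-host rule above is not expressible via the profile alone,
# so decompile the CRUSH map, paste the rule in, and re-inject it:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... add the rule from this mail to crushmap.txt ...
crushtool -c crushmap.txt -o crushmap.bin
ceph osd setcrushmap -i crushmap.bin

# Create the pool against the profile and the hand-written rule:
ceph osd pool create ecpool 128 128 erasure ec-2room ec-2room-rule
```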

The storage overhead of k=3 m=5 is high (8/3, i.e. about 2.67x raw per usable byte), but still better capacity-wise than a replicated pool with size=4 min_size=2 spanning the 2 rooms (4x). Whether it is better overall is another question; maybe not.
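The capacity comparison can be made concrete with the raw-to-usable ratio (plain arithmetic, not Ceph code):

```python
def overhead_ec(k, m):
    """Raw bytes consumed per usable byte in an EC pool."""
    return (k + m) / k

def overhead_replicated(size):
    """Raw bytes consumed per usable byte in a replicated pool."""
    return float(size)

ec = overhead_ec(3, 5)        # the k=3 m=5 pool from the example above
rep = overhead_replicated(4)  # replicated size=4 min_size=2 across 2 rooms
print(round(ec, 2), rep)      # -> 2.67 4.0
assert ec < rep  # EC still wins on capacity, despite the large m
```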

/Maged





