Replication vs Erasure Coding with only 2 elements in the failure-domain.

Hi all,


We have (only) 2 separate "rooms" (CRUSH buckets) and would like to build a cluster that can handle the complete loss of one room.


Our first idea would be to use replication:

-> Having read the mail thread "2x replication: A BIG warning", we would choose a replication size of 3.

-> We need to change the {bucket-type} used by the default ruleset to room (as described here: http://docs.ceph.com/docs/master/rados/operations/crush-map/#crushmaprules ).

For that we created a new CRUSH rule:


rule replicated_ruleset_new {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
}
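
For reference, applying such a rule would look roughly like the following (only a sketch; the map file names and the pool name "rbd" are placeholders, and on Jewel/Kraken the pool option is "crush_ruleset" while later releases use "crush_rule"):

        # compile the edited text map and inject it into the cluster
        crushtool -c crushmap.txt -o crushmap.bin
        ceph osd setcrushmap -i crushmap.bin
        # point an existing pool at ruleset 3
        ceph osd pool set rbd crush_ruleset 3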



Our second idea would be to use Erasure Coding, as it fits our performance requirements and would use less raw space.


Creating an EC profile like:

    "ceph osd erasure-code-profile set eck2m2room k=2 m=2 ruleset-failure-domain=room"

and a pool using that EC profile, with "ceph osd pool create ecpool 128 128 erasure eck2m2room", of course leads to 128 creating+incomplete PGs, as we only have 2 rooms.


Is there a way to store the "parity chunks" (m) in both rooms, so that the loss of one room could be tolerated?


If I understood correctly, an Erasure Coding profile of, for example, k=2, m=2 would use the same raw space as replication with a size of 2, but be more reliable, as we could afford to lose more OSDs at the same time.
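
As a back-of-the-envelope check (assuming I have the arithmetic right): with k=2, m=2, a 4 MB object is split into two 2 MB data chunks plus two 2 MB coding chunks, i.e. 8 MB of raw space, a factor of (k+m)/k = 2, the same as size-2 replication, while any 2 of the 4 chunks are enough to reconstruct the object.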

Would it be possible to instruct the CRUSH rule to store the first k and m chunks in room 1, and the second k and m chunks in room 2?
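
Something along these lines is what we have in mind (only a sketch to illustrate the question; the rule name and ruleset number are placeholders, and as far as I understand erasure-coded rules use "indep" rather than "firstn"):

rule eck2m2room_rule {
        ruleset 4
        type erasure
        min_size 4
        max_size 4
        step set_chooseleaf_tries 5
        step take default
        step choose indep 2 type room
        step chooseleaf indep 2 type host
        step emit
}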



Many thanks for your feedback!


Thanks!

François

