Replication vs Erasure Coding with only 2 elements in the failure-domain.

Hi,


On 03/07/2017 05:53 PM, Francois Blondel wrote:
>
> Hi all,
>
>
> We have (only) 2 separate "rooms" (crush buckets) and would like to
> build a cluster that is able to handle the complete loss of one room.
>

*snipsnap*
>
> Our second idea would be to use Erasure Coding, as it fits our
> performance requirements and would use less raw space.
>
>
> Creating an EC profile like:
>
>    "ceph osd erasure-code-profile set eck2m2room k=2 m=2
> ruleset-failure-domain=room"
>
> and a pool using that EC profile, with "ceph osd pool create ecpool
> 128 128 erasure eck2m2room", of course leads to having 128
> "creating+incomplete" PGs, as we only have 2 rooms.
>
>
> Is there somehow a way to store the "parity chunks" (m) across both
> rooms, so that the loss of one room could be survived?
>
>
> If I understood correctly, an erasure coding profile of, for example,
> k=2, m=2 would use the same raw space as replication with a size of 2,
> but be more reliable, as we could afford the loss of more OSDs at the
> same time.
>
> Would it be possible to instruct the crush rule to store the first k
> and m chunks in room 1, and the second k and m chunks in room 2?
>

As far as I understand erasure coding, there's no special handling for
parity or data chunks: to reassemble an EC object you just need any k
chunks, regardless of whether they are data or parity chunks.
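
For example, with k=2, m=2 a 4 MB object is cut into two 2 MB data
chunks plus two 2 MB coding chunks, i.e. 8 MB of raw space and thus the
same 2x overhead as size-2 replication, and any two of those four
chunks are enough to rebuild the object. So if each room ends up with
two of the four chunks, the loss of one room still leaves enough chunks
to read the data.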

You should be able to distribute the chunks evenly across the two rooms
by creating a new crush rule along these lines (a sketch in crush map
syntax follows below the list):

- min_size 4
- max_size 4
- step take <first room>
- step chooseleaf firstn 2 type host
- step emit
- step take <second room>
- step chooseleaf firstn 2 type host
- step emit
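
In decompiled crush map syntax such a rule could look roughly like the
sketch below. The rule name, the ruleset id and the room bucket names
"room1"/"room2" are placeholders; also note that erasure-coded pools
normally use "indep" instead of "firstn", so that the surviving chunks
keep their positions when an OSD fails:

    rule eck2m2_two_rooms {
            ruleset 1
            type erasure
            min_size 4
            max_size 4
            # retry values as used by the default EC rules
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take room1
            step chooseleaf indep 2 type host
            step emit
            step take room2
            step chooseleaf indep 2 type host
            step emit
    }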

I'm not 100% sure whether chooseleaf is correct here or whether another
choose step is necessary to ensure that two OSDs from different hosts
are chosen (if that matters in your setup). The important point is
using two choose-emit cycles and the correct starting points. Just
insert the crush bucket names of your rooms.
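
If you want to try this, the usual workflow is roughly the following
(file names and the rule/pool names are only examples):

    # export and decompile the current crush map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # add the rule to crushmap.txt, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # check the mapping produced by the rule (use its ruleset number)
    crushtool -i crushmap.new --test --rule 1 --num-rep 4 --show-mappings
    # create the EC pool with the profile and the new rule
    ceph osd pool create ecpool 128 128 erasure eck2m2room eck2m2_two_rooms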

This approach should work, but it has two drawbacks:

- crash handling
In case of a host failing in a room, the PGs from that host will be
recovered onto other hosts in the same room. You have to ensure that
there's enough spare capacity in each room (instead of just enough
capacity in the cluster as a whole), which might be tricky.
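
Something like "ceph osd df tree" makes it easy to keep an eye on the
per-room and per-host utilization while planning for that.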

- bandwidth / host utilization
Almost all Ceph-based applications/libraries use the 'primary' OSD for
accessing data in a PG. The primary OSD is the first one generated by
the crush rule. In the example above, the primary OSDs will all be
located in the first room, so all client traffic will go to hosts in
that room. Depending on your setup this might not be desirable.
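
You can verify where the primaries actually end up once the pool
exists, e.g. with "ceph pg dump pgs_brief", which lists the acting
primary of every PG, or for a single test object (the object name is
just an example):

    ceph osd map ecpool testobject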

Unfortunately I'm not aware of a solution for that. It would require
replacing 'step take <first room>' with something like 'step take <any
room>' and 'step take <second room>' with 'step take <a different
room>'. That kind of iteration is not part of crush as far as I know.
Maybe someone else can give some more insight into this.

Regards,
Burkhard