What crush ruleset for a given SHEC configuration?

Hi Takeshi,

In the context of http://ceph.com/docs/master/rados/operations/erasure-code-shec/ it would be useful to have a more detailed explanation, in the introduction, of why SHEC is more efficient during recovery.

Am I correct to assume that SHEC does not provide a way to control the locality of the chunks? For instance, in the following scenario:

rack 1 has 10 OSDs
rack 2 has 10 OSDs

a crush ruleset is made to provide 15 OSDs, 7 in the first rack and 8 in the second: the first 7 are in rack 1, the last 8 in rack 2 (a sketch of such a ruleset follows the figure 3 discussion below). When SHEC is used with such a crush ruleset, it cannot guarantee that the loss of one chunk in rack 2 can always be recovered using only chunks from rack 2. When looking at figure 3 of

https://wiki.ceph.com/Planning/Blueprints/Hammer/Shingled_Erasure_Code_%28SHEC%29

with D1 to D5, P1 and P2 in rack 1 and D6 to D10, P3, P4, P5 in rack 2, my understanding is that to recover D6, which is in rack 2, it may be necessary to use P2 from rack 1. And to recover D5, which is in rack 1, it may be necessary to use P3 from rack 2.
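
To make the ruleset part of the scenario concrete, here is roughly what I have in mind: two take/emit passes, one per rack. This is untested, and the bucket names rack1 and rack2 are only placeholders for whatever the crush map actually contains:

rule shec_rack1_7_rack2_8 {
        ruleset 1
        type erasure
        min_size 15
        max_size 15
        # pick 7 OSDs in the first rack for the first 7 chunks
        step take rack1
        step choose indep 7 type osd
        step emit
        # pick 8 OSDs in the second rack for the remaining 8 chunks
        step take rack2
        step choose indep 8 type osd
        step emit
}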
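
To make the locality question concrete, here is a small python sketch of the check I have in mind. The coverage map is only my guess at a shingled layout (each parity covering six consecutive data chunks, shifted by two, with P4 and P5 wrapping around), and the recovery model is simplified to "one parity plus the other chunks it covers"; the actual layout and decoding in figure 3 may well differ, which is part of my question:

# Assumed shingled coverage: which data chunks each parity covers.
coverage = {
    "P1": {"D1", "D2", "D3", "D4", "D5", "D6"},
    "P2": {"D3", "D4", "D5", "D6", "D7", "D8"},
    "P3": {"D5", "D6", "D7", "D8", "D9", "D10"},
    "P4": {"D7", "D8", "D9", "D10", "D1", "D2"},  # assumed wrap-around
    "P5": {"D9", "D10", "D1", "D2", "D3", "D4"},  # assumed wrap-around
}

# Chunk placement from the scenario above: 7 chunks in rack 1, 8 in rack 2.
rack = {}
for name in ("D1", "D2", "D3", "D4", "D5", "P1", "P2"):
    rack[name] = 1
for name in ("D6", "D7", "D8", "D9", "D10", "P3", "P4", "P5"):
    rack[name] = 2

def local_recovery(lost):
    """Return a parity that can rebuild `lost` using only chunks from the
    same rack, or None if every covering parity needs a remote chunk."""
    for parity, covered in coverage.items():
        if lost not in covered:
            continue
        needed = (covered - {lost}) | {parity}
        if all(rack[c] == rack[lost] for c in needed):
            return parity
    return None

for chunk in ("D5", "D6"):
    print(chunk, "->", local_recovery(chunk))

With this assumed layout both D5 and D6 print None, i.e. the boundary chunks can only be rebuilt with help from the other rack, which is what prompted the question.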

Maybe I'm missing something? Thanks in advance for your explanations :-)

Cheers
-- 
Loïc Dachary, Artisan Logiciel Libre
