Re: Matching shard to crush bucket in erasure coding

On Tue, 22 Aug 2017, Oleg Kolosov wrote:
> Hi
> I'm working on a new erasure code, different from the one implemented
> in the ceph lrc plugin. I'm trying to understand how the plugin
> allocates shards within a layer, but it's not clear from the code.
> 
> For example, say I have a local group containing shards 0, 1, 2, 5,
> where 0, 1, 2 are data and 5 is parity. I'd like to place all of these
> shards in the same rack.
> 
> Where in the lrc plugin's .cc code are these shards, which compose a
> single layer, allocated to the same rack?

The EC code has to be carefully matched with a CRUSH rule that lays things 
out properly.  That is why the EC CRUSH rules are tied to the erasure 
code profiles and the plugin handles rule creation.
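
For instance, creating a pool from an LRC profile generates the matching 
rule automatically.  A minimal sketch (the profile and pool names, pg 
counts, and the mapping/layers strings here are illustrative):

 # the plugin creates the CRUSH rule when the pool is built from
 # the profile
 ceph osd erasure-code-profile set lrcprofile \
     plugin=lrc \
     mapping=DDD_ \
     layers='[ [ "DDDc", "" ] ]'
 ceph osd pool create lrcpool 64 64 erasure lrcprofile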

In the LRC case, you have a rule like

 take root
 choose 3 rack
 choose 5 osd

to get a set of 15 shards, in groups of 5.  The LRC plugin has some logic 
to put the original data shards in the first 3 or 4 shards of each of 
those sets of 5, and the parity blocks in the remaining slots.  You'll want 
to do something similar.  Bonus points if you can reuse the scheme that 
LRC uses...
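
To make that concrete, here is a sketch of how the pieces could line up 
for the 3-racks-of-5 layout above.  The mapping/layers strings are 
illustrative, the layout is deliberately simplified to use only local 
parities, and the crush-steps parameter was called ruleset-steps in 
older releases:

 # 15 shards in 3 groups of 5; slots 0-3 of each group hold data,
 # slot 4 holds that group's local parity
 mapping=DDDD_DDDD_DDDD_
 layers='[
     [ "DDDDc__________", "" ],
     [ "_____DDDDc_____", "" ],
     [ "__________DDDDc", "" ],
 ]'
 # generated rule: pick 3 racks, then 5 osds in each, so each group
 # of 5 consecutive shards lands in a single rack
 crush-steps='[ [ "choose", "rack", 3 ], [ "chooseleaf", "osd", 5 ] ]'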

sage