RE: Simplified LRC in CEPH

On Fri, 1 Aug 2014, Andreas Joachim Peters wrote:
> Hi Loic, 
> your initial scheme is more flexible. Nevertheless I think that a 
> simplified description covers 99% of people's use cases.
> 
> You can absorb the logic of implied parity into your generic LRC plug-in 
> if you find a good way to describe something like 'compute but don't 
> store'. If you then additionally offer the poor man's description 
> (k,m,l), one covers both the simple and the complex use case!

Yeah.

I like the part of Loic's strategy where we can do a search over 
different repair scenarios and choose the one that minimizes the cost 
(in terms of IO or CPU or whatever).  If it can express the Xorbas-style 
code with an implied block, then I suspect we would want to do something 
similar internally anyway.
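
For illustration, a throwaway sketch of that kind of search (plain 
Python, nothing to do with the actual plugin API) could look like the 
following, using the chunk names from Andreas' k=16 m=4 l=4 example 
further down in the thread, with R1 implied:

def local_groups():
    # Four data sub groups with their local parity, plus the group that
    # holds the stored RS chunks and their local parity.
    groups = []
    for g in range(4):
        data = ["D%d" % (g * 4 + i + 1) for i in range(4)]
        groups.append(data + ["LP%d" % (g + 1)])
    groups.append(["R2", "R3", "R4", "LP5"])
    return groups

def repair_candidates(chunk):
    # Every set of stored chunks whose XOR rebuilds `chunk`.
    candidates = []
    for group in local_groups():
        if chunk in group:
            peers = set(group) - {chunk}
            if "LP5" in group:
                # R1 is not stored, so the XOR relation of this group
                # additionally needs LP1..LP4 to stand in for it.
                peers |= {"LP1", "LP2", "LP3", "LP4"}
            candidates.append(peers)
    return candidates

def cheapest_repair(chunk, lost):
    # Pick the feasible repair that reads the fewest chunks.
    feasible = [c for c in repair_candidates(chunk) if not c & lost]
    return min(feasible, key=len) if feasible else None

print(cheapest_repair("D7", {"D7"}))   # 4 reads: D5, D6, D8, LP2
print(cheapest_repair("R2", {"R2"}))   # 7 reads: R3, R4, LP1..LP5

A real search would of course also consider a full RS decode as a 
fallback and weight CPU as well as IO, but the selection principle is 
the same.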

In any case, being able to simply say l=... would definitely be a win.

sage


> 
> Cheers Andreas.
> 
> ________________________________________
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [ceph-devel-owner@xxxxxxxxxxxxxxx] on behalf of Loic Dachary [loic@xxxxxxxxxxx]
> Sent: 01 August 2014 15:14
> To: Andreas Joachim Peters; ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: Simplified LRC in CEPH
> 
> Hi Andreas,
> 
> It probably is just what we need. Although https://github.com/ceph/ceph/pull/1921 is more flexible in terms of chunk placement, I can't think of a use case where that flexibility would actually be useful. Maybe it's just me being back from holidays, but it smells like a solution to a non-existent problem ;-) The other difference is that your proposal does not allow for nested locality, e.g. datacenter locality combined with rack locality within a datacenter. What do you think ?
> 
> Cheers
> 
> On 01/08/2014 18:29, Andreas Joachim Peters wrote:
> > Hi Loic,
> >
> >> It would, definitely. How would you control where data / parity chunks are located ?
> >
> > I ordered the chunks after encoding in this way:
> >
> > ( 1 2 3 4 LP1 ) ( 5 6 7 8 LP2 ) ( 9 10 11 12 LP3 ) ( 13 14 15 16 LP4 ) ( R2 R3 R4 LP5 )
> >
> > (k/l)+1 consecutive chunks always belong together location-wise ... and I require that (k/l) <= m.
> >
> > That is probably straightforward to express in a CRUSH rule.
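> >
> > For illustration, that ordering can be generated from (k, m, l) alone 
> > (a throwaway Python sketch, not the plugin interface):
> >
> > def lrc_layout(k, m, l):
> >     # Each placement group holds (k/l)+1 consecutive chunks: (k/l) data
> >     # chunks plus their local parity; the last group holds the stored
> >     # RS chunks (R1 is implied) plus their own local parity.
> >     assert k % l == 0 and k // l <= m
> >     groups = []
> >     for g in range(l):
> >         data = [str(g * (k // l) + i + 1) for i in range(k // l)]
> >         groups.append(data + ["LP%d" % (g + 1)])
> >     rs = ["R%d" % i for i in range(2, m + 1)]
> >     groups.append(rs + ["LP%d" % (l + 1)])
> >     return groups
> >
> > print(lrc_layout(16, 4, 4))
> > # [['1','2','3','4','LP1'], ..., ['13','14','15','16','LP4'],
> > #  ['R2','R3','R4','LP5']]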
> >
> > Cheers Andreas.
> >
> > PS: just one correction, I wrote 'degree' but it is called the 'distance' of the code
> >
> > ________________________________________
> > From: Loic Dachary [loic@xxxxxxxxxxx]
> > Sent: 01 August 2014 14:32
> > To: Andreas Joachim Peters; ceph-devel@xxxxxxxxxxxxxxx
> > Subject: Re: Simplified LRC in CEPH
> >
> > Hi Andreas,
> >
> > Enlightening explanation, thank you !
> >
> > On 01/08/2014 13:45, Andreas Joachim Peters wrote:
> >> Hi Loic et. al.
> >>
> >> I managed to prototype (and understand) LRC encoding similar to Xorbas in the ISA plug-in.
> >>
> >> As an example, take a (16,4) code (which gives nice alignment for 4k blocks):
> >>
> >> You split the data chunks into 4 sub groups and build the local parities LP1-LP4:
> >>
> >> LP1 = 1 ^ 2 ^ 3 ^ 4
> >>
> >> LP2 = 5 ^ 6 ^ 7 ^ 8
> >>
> >> LP3 = 9 ^ 10 ^ 11 ^ 12
> >>
> >> LP4 = 13 ^ 14 ^ 15 ^ 16
> >>
> >> You do normal erasure encoding with 4 RS chunks:
> >>
> >> RS(1..16) = (R1, R2, R3, R4)
> >>
> >> You compute the local parity LP5 for the erasure chunks:
> >>
> >> LP5 = R1 ^ R2 ^ R3 ^ R4
> >>
> >> The following relation holds for Vandermonde matrices (because the first matrix row contains only 1's):
> >>
> >> LP1 ^ LP2 ^ LP3 ^ LP4 = R1
> >>
> >> So you need to store only 24 chunks (not 25):
> >>
> >> (1 .. 16) (R2,R3,R4) (LP1,LP2,LP3,LP4,LP5)
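> >>
> >> To make the bookkeeping concrete, a small throwaway Python sketch (pure
> >> XOR, no real RS code): R1 is modelled as the plain XOR of all the data
> >> chunks, which is what the all-ones Vandermonde row gives you; R2-R4 and
> >> LP5 need the real RS encoder and are left out.
> >>
> >> import os
> >>
> >> # 16 data chunks of 4 KiB each (toy sizes)
> >> data = {i: os.urandom(4096) for i in range(1, 17)}
> >>
> >> def xor(*chunks):
> >>     out = bytearray(len(chunks[0]))
> >>     for c in chunks:
> >>         for j, b in enumerate(c):
> >>             out[j] ^= b
> >>     return bytes(out)
> >>
> >> # Local parities over the four sub groups
> >> LP = {g: xor(*(data[i] for i in range(4 * g - 3, 4 * g + 1)))
> >>       for g in (1, 2, 3, 4)}
> >>
> >> # The implied chunk: R1 equals the XOR of all data chunks, so it never
> >> # has to be stored and can always be rebuilt from LP1..LP4
> >> R1 = xor(*data.values())
> >> assert xor(LP[1], LP[2], LP[3], LP[4]) == R1
> >>
> >> # Single failure inside a sub group: rebuild chunk 7 from its three
> >> # neighbours and LP2, reading 4 chunks instead of 16
> >> assert xor(data[5], data[6], data[8], LP[2]) == data[7]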
> >>
> >> Side remark: in this simplified explanation I imply R1, not LP5 as described in the Xorbas paper
> >
> > Does it make a difference or is it equivalent ?
> >
> >> The degree of the code is 5, i.e. you can construct a failure pattern with 5 losses where you lose data, while if you are 'lucky' the code can even repair up to 8 failures (one loss in each sub group + LP5, R2, R3, R4).
> >>
> >> The reconstruction traffic for single failures is:
> >>
> >> [(20 x 4) + (4 x 8)]/24 =~ 4.66 x [disk size] instead of 16 x [disk size]
> >>
> >> There are three repair scenarios:
> >>
> >> 1) only single failures in any of the local groups using LRC repair (simple parities)
> >> 2) multiple failures which can be reconstructed with RS parities without LRC repair
> >> 3) multiple failures which can be reconstructed with RS parities after LRC repair
> >>
> >> [ 4) reconstruction impossible  ]
> >>
> >> With your proposed LRC layer (decoding) model in mind, there is a certain contradiction here, because in some cases LRC repair is not required at all: all the failures can be resolved with RS alone.
> >>
> >> In the end I think it is sufficient to introduce a parameter l in the EC parameter list which defines the number of sub groups among the data chunks, and to always imply one local parity for all RS chunks. You can then specify an LRC easily with three simple parameters:
> >>
> >> k=16 m=4 l=4
> >>
> >> The Xorbas configuration would be written as k=10 m=4 l=2
> >>
> >> Wouldn't that be much simpler and sufficient? What do you think?
> >
> > It would, definitely. How would you control where data / parity chunks are located ?
> >
> > Cheers
> >>
> >> Cheers Andreas.
> >
> > --
> > Loïc Dachary, Artisan Logiciel Libre
> >
> 
> --
> Loïc Dachary, Artisan Logiciel Libre
> 