RE: Adding compression/checksum support for bluestore.

On Tue, 5 Apr 2016, Allen Samuels wrote:
> > -----Original Message-----
> > From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> > Sent: Tuesday, April 05, 2016 5:36 AM
> > To: Allen Samuels <Allen.Samuels@xxxxxxxxxxx>
> > Cc: Chris Dunlop <chris@xxxxxxxxxxxx>; Igor Fedotov
> > <ifedotov@xxxxxxxxxxxx>; ceph-devel <ceph-devel@xxxxxxxxxxxxxxx>
> > Subject: RE: Adding compression/checksum support for bluestore.
> > 
> > On Mon, 4 Apr 2016, Allen Samuels wrote:
> > > But there's an approximation that gets the job done for us.
> > >
> > > When U is VERY SMALL (this will always be true for us :)), you can
> > > approximate 1-(1-U)^D as D * U.  (For even modest values of U, say
> > > 10^-5, this is a very good approximation.)
> > >
> > > Now the math is easy.
> > >
> > > The odds of failure when reading a block of size D are now D * U;
> > > with a C-bit checksum, the odds of an undetected failure become
> > > (D * U) / (2^C).
> > >
> > > It's now clear that if you double the data size, you need to add one
> > > bit to your checksum to compensate.
> > >
> > > (Again, the exact requirement is slightly less than 1 bit, but in
> > > the range we care about 1 bit will always do it.)
> > >
> > > Anyways, that's what we worked out.
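
A quick Python sketch of that approximation (toy values for U, D, and
C, purely illustrative):

    # Check that 1-(1-U)^D ~= D*U when U is tiny, and that doubling D
    # is compensated by adding one bit to the checksum.
    U = 1e-15                       # hw UBER: odds a single bit is bad
    C = 32                          # checksum width in bits

    for D in (4096 * 8, 8192 * 8):          # block sizes in bits
        exact = 1 - (1 - U) ** D            # exact odds the block is bad
        approx = D * U                      # first-order approximation
        silent = approx / 2 ** C            # bad data the checksum misses
        print(D, exact, approx, silent)
    # Doubling D doubles 'silent'; adding one bit to C halves it again.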
> > 
> > D = block size, U = hw UBER, C = checksum bits.  Let's add N = number of bits you
> > actually want to read.  In that case, we have to read (N / D) blocks of D bits,
> > and we get
> > 
> > P(reading N bits and getting some bad data and not knowing it)
> > 	= (D * U) / (2^C) * (N / D)
> > 	= U * N / 2^C
> > 
> > and the D term (block size) disappears.  IIUC this is what Chris was
> > originally getting at.  The block size affects the probability of an
> > error on any one block, but a user reading something doesn't care about
> > the block size--they care about how much data they want to read.  I
> > think in that case it doesn't really matter (modulo rounding error,
> > minimum read size, how precisely we can locate the error, etc.).
> > 
> > Is that right?
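
The cancellation is easy to check numerically, too; a minimal Python
sketch (same made-up U and C as above, with N fixed at 1 TB):

    # P(silently bad data when reading N bits) = (D*U)/2^C * (N/D),
    # independent of the block size D.  Values are illustrative only.
    U = 1e-15                   # hw UBER
    C = 32                      # checksum width in bits
    N = 8 * 10 ** 12            # total bits read (1 TB)

    for D in (512 * 8, 4096 * 8, 65536 * 8):   # assorted block sizes
        p = (D * U) / 2 ** C * (N / D)         # per-block odds * block count
        print(D, p)                            # same answer: U * N / 2^C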
> 
> It's a "Bit Error Rate", not an "I/O error rate" -- it doesn't matter 
> how you chunk the bits into blocks and I/O operations.

Right.  And you use it to calculate "The odds of failure for reading a 
block of size D", but I'm saying that the user doesn't care about D (which 
is an implementation detail).  They care about N, the amount of data they 
want to read.  And when you calculate the probability of getting bad data 
after reading *N* bits, it has nothing to do with D.
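
To put illustrative numbers on it: with U = 10^-15, C = 32, and N = 
8*10^12 bits (a 1 TB read), U * N / 2^C works out to about 1.9*10^-12, 
and that answer is the same whether D is 512 bytes or 64KB.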

Does that make sense?

sage