RE: Hitchhiker erasure code

Hi Loic, 
I looked at that some time ago.

Table 1 in the paper says it all:

If you care about decoding and reconstruction of data, it gives a good improvement.
If you care mainly about encoding speed, it is not the optimal choice (+72.1% encoding time).

The algorithm optimizes the reconstruction of data units. This is relevant if your read size is typically smaller than the block size, e.g. you encode 4 MB objects and read 4 kB pages. With normal EC you get a read amplification of K * 4 kB if a data chunk is down, while with Hitchhiker you get only about 2/3 of that traffic in the (10,4) case.
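
To put numbers on that, a back-of-the-envelope sketch (plain Python, not Ceph code; the 2/3 factor is just the one quoted above):

K = 10                 # data chunks in a (10,4) stripe
PAGE = 4 * 1024        # read size in bytes

# Classic RS degraded read: the 4 kB page has to be decoded from K
# surviving chunks, so you pull a page-sized strip from each of them.
rs_traffic = K * PAGE              # 40960 bytes read to serve 4096 bytes

# Hitchhiker-XOR on (10,4): roughly 2/3 of that traffic.
hh_traffic = rs_traffic * 2 / 3    # ~27307 bytes

print(rs_traffic, hh_traffic)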

The most interesting variant to implement is probably Hitchhiker-XOR+. You have to combine it with a Vandermonde matrix, because it requires that the first parity is just the XOR of all data chunks.
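
To illustrate why that property matters, a minimal sketch (plain Python; chunk sizes and names are my own, not from the paper). With a Vandermonde matrix the first parity row is all ones in GF(2^8), so that parity degenerates to a plain XOR, and a lost data chunk can be rebuilt with XOR alone:

import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K, CHUNK = 10, 4096
data = [os.urandom(CHUNK) for _ in range(K)]

# First parity of the Vandermonde code: coefficients all 1 in GF(2^8),
# i.e. just the XOR of the data chunks.
p0 = reduce(xor, data)

# Rebuild a lost data chunk with XOR only -- the property that
# Hitchhiker-XOR+ piggybacks on.
lost = 3
rebuilt = reduce(xor, (c for i, c in enumerate(data) if i != lost), p0)
assert rebuilt == data[lost]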

So, yes, there is certainly a benefit in implementing it compared to other approaches (Xorbas, LRC), since it does not involve any space overhead and opens the door to using larger K values and saving space!

Cheers, Andreas.
