Re: Adding compression support for bluestore.

On 21.03.2016 18:14, Allen Samuels wrote:

That's an interesting proposal, but I can see the following caveats here (I beg
pardon if I misunderstood something):
1) Potentially uncontrolled extent map growth when extensive (over)writing
takes place.
Yes, a naïve insertion policy could lead to uncontrolled growth, but I don't think this needs to be the case. I assume that when you add an "extent", you won't increase the size of the array unnecessarily, i.e., if the new extent doesn't overlap an existing extent then there's no reason to increase the size of the map array -- actually you want to insert the new extent at the <smallest> array index that doesn't overlap, only increasing the array size when that's not possible. I'm not 100% certain of the worst case, but I believe that it's limited to the ratio between the largest extent and the smallest extent (i.e., if we assume writes are no larger than -- say -- 1MB and the smallest are 4K, then I think the max depth of the array is 1M/4K => 2^8 = 256, which is ugly but not awful, since this is probably a contrived case). This might be a reason to limit the largest extent size to something a bit smaller (say 256K)...
It looks like I misunderstood something... It seemed to me that your array grows with the maximum number of block versions.
Imagine you have 1000 writes to 0~4K and 1000 writes to 8K~4K.
I supposed that this would create the following array:
[
0: <0:{...,4K}, 8K:{...,4K}>,
...
999: <0:{...,4K}, 8K:{...,4K}>,
]

What happens in your case?

2) Read/lookup algorithmic complexity. To find the valid block (or detect an
overwrite) one would have to sequentially enumerate the full array. Given 1), that
might be very inefficient.
Only requires one log2 lookup for each index of the array.
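The "one log2 lookup per index" claim could be illustrated like this. Note the assumptions: levels are sorted maps as sketched earlier, newer data lands at higher indices (so the scan goes top-down), and `find_extent` is a hypothetical name; each level costs one O(log n) tree search.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

using Level = std::map<uint64_t, uint64_t>;   // offset -> length, sorted

// Scan levels from newest (highest index) to oldest; within each level a
// single binary search decides whether an extent covers 'off'. Returns the
// level index holding the valid data, or nullopt if the offset is a hole.
std::optional<size_t> find_extent(const std::vector<Level>& levels,
                                  uint64_t off) {
  for (size_t i = levels.size(); i-- > 0; ) {
    auto it = levels[i].upper_bound(off);     // first extent strictly past 'off'
    if (it == levels[i].begin())
      continue;                               // nothing starts at or before 'off'
    --it;                                     // candidate covering 'off'
    if (it->first + it->second > off)
      return i;                               // 'off' falls inside this extent
  }
  return std::nullopt;
}
```

So the total cost is (array depth) x O(log n), which is why the depth question in 1) matters.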
This depends on 1), so it's still unclear at the moment.
3) It doesn't deal with unaligned overwrites. What happens when a
block is partially overwritten?
I'm not sure I understand what cases you're referring to. Can you give an example?

Well, as far as I understand, in the proposal above you were operating on entire blocks (i.e. 4K of data). Thus overwriting a block is a simple case - you just need to create a new block "version" and insert it into the array.
But real user writes seem to be unaligned to the block size.
E.g.
write 0~2048
write 1024~3072

You have to either track both extents or merge them. The latter is a bit tricky for the compression case.
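The bookkeeping half of the problem can be sketched: when an unaligned write arrives, any partially covered extent is clipped down to its surviving prefix and/or suffix. This is a hypothetical sketch (`punch_range` is not a bluestore function); it shows why compression makes this tricky: for uncompressed data clipping is pure metadata work, but a clipped compressed blob would have to be re-read and re-compressed, or kept whole with only its live range tracked.

```cpp
#include <cstdint>
#include <map>

using Level = std::map<uint64_t, uint64_t>;   // offset -> length, sorted

// Remove [off, off+len) from one level of extents, keeping the surviving
// prefix/suffix of any partially covered extent. For compressed extents the
// surviving pieces no longer match the stored blob, which is the tricky part.
void punch_range(Level& lvl, uint64_t off, uint64_t len) {
  uint64_t end = off + len;
  auto it = lvl.lower_bound(off);             // first extent at or past 'off'
  if (it != lvl.begin()) {
    auto prev = std::prev(it);
    uint64_t pend = prev->first + prev->second;
    if (pend > off) {
      prev->second = off - prev->first;       // keep surviving prefix
      if (pend > end)
        lvl[end] = pend - end;                // extent straddles both sides
    }
  }
  while (it != lvl.end() && it->first < end) {
    uint64_t eend = it->first + it->second;
    it = lvl.erase(it);                       // fully or partially covered
    if (eend > end)
      lvl[end] = eend - end;                  // tail survives the punch
  }
}
```

For the example above (existing write 0~2048, new write 1024~3072), punching 1024~3072 trims the old extent to 0~1024, after which the new extent is inserted; both pieces are now tracked separately.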