Re: Adding Data-At-Rest compression support to Ceph

On 24.09.2015 18:34, Sage Weil wrote:
> I was also assuming each stripe unit would be independently compressed,
> but I didn't think about the efficiency. This approach implies that
> you'd want a relatively large stripe size (100s of KB or more). Hmm, a
> quick google search suggests the zlib compression window is only 32KB
> anyway, which isn't so big. The more aggressive algorithms probably
> aren't what people would reach for anyway for CPU utilization
> reasons... I guess?
>
> sage

There is probably no need for strict alignment with the stripe size. We can work with whatever block sizes the client provides at write time. If a client writes in stripes, we compress each stripe as one block. If another client uses larger blocks (e.g. a caching agent on flush), we can either compress at that size or split the provided block into several smaller chunks (e.g. up to max N*stripe_size) to reduce the overhead on random reads. Even if a client uses dynamic block sizes (low-level RADOS use?), we can still follow them to some degree without a static binding to the stripe size. Of course all of this is much easier when only appends are permitted; the general "random writes" case will be more complex.
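To make the chunking idea concrete, here is a minimal sketch using plain zlib, not Ceph's actual compression hooks: an incoming write is split into independently compressed chunks capped at some max_chunk (think N*stripe_size). The names compress_write, Chunk, and max_chunk are made up for illustration.

#include <zlib.h>

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: chunk boundaries follow the client's write, capped at
// max_chunk (e.g. N * stripe_size). Each chunk is compressed on its
// own, so a random read only needs to decompress the chunks it touches.
struct Chunk {
    uint64_t logical_off;              // offset within the original write
    uint32_t raw_len;                  // uncompressed length of this chunk
    std::vector<unsigned char> data;   // zlib-compressed bytes
};

std::vector<Chunk> compress_write(const unsigned char* buf, size_t len,
                                  size_t max_chunk)
{
    std::vector<Chunk> chunks;
    for (size_t off = 0; off < len; off += max_chunk) {
        size_t raw = std::min(max_chunk, len - off);
        Chunk c;
        c.logical_off = off;
        c.raw_len = static_cast<uint32_t>(raw);
        uLongf dst_len = compressBound(raw);   // worst-case output size
        c.data.resize(dst_len);
        if (compress(c.data.data(), &dst_len, buf + off, raw) != Z_OK)
            return {};                         // error handling elided
        c.data.resize(dst_len);                // shrink to actual size
        chunks.push_back(std::move(c));
    }
    return chunks;
}

The per-chunk (logical_off, raw_len) metadata is what would let a reader map a random read back to just the compressed extents it overlaps.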

Thanks,
Igor