Re: Adding LZ4 compression support to Btrfs

On Fri, Feb 26, 2021 at 10:36:53AM +0100, David Sterba wrote:
> On Thu, Feb 25, 2021 at 10:50:56AM -0800, Eric Biggers wrote:
> > On Thu, Feb 25, 2021 at 02:26:47PM +0100, David Sterba wrote:
> > > 
> > > LZ4 support has been asked for so many times that it has its own FAQ
> > > entry:
> > > https://btrfs.wiki.kernel.org/index.php/FAQ#Will_btrfs_support_LZ4.3F
> > > 
> > > The decompression speed is not the only thing that should be evaluated;
> > > the way compression works in btrfs (in 4K blocks) does not allow good
> > > compression ratios, and overall LZ4 does not do much better than LZO. So
> > > it is not worth the additional compatibility costs. With ZSTD we got
> > > high compression, and real-time compression levels have recently been
> > > added that we'll use in btrfs eventually.
> > 
> > When ZSTD support was being added to btrfs, it was claimed that btrfs compresses
> > up to 128KB at a time
> > (https://lore.kernel.org/r/5a7c09dd-3415-0c00-c0f2-a605a0656499@xxxxxx).
> > So which is it -- 4KB or 128KB?
> 
> Logical extent ranges are sliced into 128K chunks that are submitted to
> the compression routine. Then the whole range is fed to the compression
> in 4K steps (or more exactly in page-sized chunks). Depending on the
> capabilities of the compression algorithm, the 4K chunks are either
> independent or can reuse some internal state of the algorithm.
> 
> LZO and LZ4 use a kind of embedded dictionary in the same buffer and
> reference that dictionary directly, i.e. they assume the whole input
> range is contiguous. That is not trivial to achieve in the kernel
> because pages are not contiguous in general.
> 
> Thus LZO and LZ4 compress 4K at a time, and each chunk is independent.
> This results in a worse compression ratio because there are fewer
> opportunities for data reuse. OTOH this allows decompression in place.
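
If I read the above correctly, a rough userspace model of that per-chunk
scheme would look like the following (not the actual btrfs code, and
compress_range_4k() is just a name made up for illustration):

#include <stddef.h>
#include <lz4.h>

#define RANGE_SIZE      (128 * 1024)    /* logical range handed to compression */
#define CHUNK_SIZE      (4 * 1024)      /* each chunk is compressed independently */

static size_t compress_range_4k(const char *in, char *out, size_t outcap)
{
        size_t total = 0;
        size_t off;

        for (off = 0; off < RANGE_SIZE; off += CHUNK_SIZE) {
                /* every call starts from a fresh state, so no chunk can
                 * back-reference data in an earlier chunk */
                int n = LZ4_compress_default(in + off, out + total,
                                             CHUNK_SIZE, (int)(outcap - total));
                if (n <= 0)
                        return 0;       /* no space left, give up on compression */
                total += n;
        }
        return total;
}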

Sorry about the noise before; I misread the btrfs LZO implementation.
Still, it sounds like that approach has a lower compression ratio than
compressing 128KB as a whole. In principle it can achieve in-place
decompression (with a margin of a whole LZO chunk), yet the LZ4/LZO
algorithms allow a mathematically tighter in-place margin.
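
For the margin math, upstream lz4.h (under LZ4_STATIC_LINKING_ONLY)
already documents how small that margin can be; quoting from memory, so
please double-check the exact definitions:

/* extra room needed at the end of the buffer to decompress in place */
#define LZ4_DECOMPRESS_INPLACE_MARGIN(compressedSize) \
        (((compressedSize) >> 8) + 32)
/* total buffer size for in-place decompression of decompressedSize bytes */
#define LZ4_DECOMPRESS_INPLACE_BUFFER_SIZE(decompressedSize) \
        ((decompressedSize) + LZ4_DECOMPRESS_INPLACE_MARGIN(decompressedSize))

That is less than 50 bytes of margin for a 4K chunk instead of a whole
extra chunk.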

> 
> ZLIB and ZSTD can have a separate dictionary and don't need the input
> chunks to be contiguous. This brings some additional overhead, like
> copying parts of the input to the dictionary and additional memory for
> temporary structures, but it yields higher compression ratios.
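
Just to check my understanding of the streaming side, a userspace sketch
with the zstd streaming API (compress_pages_zstd() and the page array
layout are made up for the example):

#include <zstd.h>

/* Feed page-sized buffers that are NOT contiguous in memory into one
 * compression stream, so later pages may still back-reference earlier
 * data through the window. */
static size_t compress_pages_zstd(char *pages[], size_t npages, size_t pagesz,
                                  char *out, size_t outcap)
{
        ZSTD_CCtx *cctx = ZSTD_createCCtx();
        ZSTD_outBuffer ob = { out, outcap, 0 };
        size_t i, ret = 0;

        if (!cctx)
                return 0;

        for (i = 0; i < npages; i++) {
                /* each page is a separate, possibly non-contiguous buffer */
                ZSTD_inBuffer ib = { pages[i], pagesz, 0 };
                ZSTD_EndDirective mode = (i + 1 == npages) ? ZSTD_e_end
                                                           : ZSTD_e_continue;
                size_t rc;

                do {
                        /* assumes outcap >= ZSTD_compressBound(npages * pagesz),
                         * so we never run out of output space here */
                        rc = ZSTD_compressStream2(cctx, &ob, &ib, mode);
                        if (ZSTD_isError(rc))
                                goto out;
                } while (ib.pos < ib.size || (mode == ZSTD_e_end && rc != 0));
        }
        ret = ob.pos;   /* total compressed size of the whole range */
out:
        ZSTD_freeCCtx(cctx);
        return ret;
}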
> 
> IIRC the biggest problem for LZ4 was the cost of setting up each 4K
> chunk: the work memory had to be zeroed. The size of the work memory is
> tunable, but at the cost of compression ratio. Either way it was either
> too slow or the ratio was too bad.
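
If that work memory means the per-call compression state, some rough
arithmetic assuming the default LZ4_MEMORY_USAGE of 14 from lz4.h: the
hash table is 1 << 14 = 16 KiB, i.e. four times the 4K payload being
compressed. A userspace sketch of the per-chunk setup, where
compress_one_chunk() is only a hypothetical helper:

#include <lz4.h>

/* Compress one independent 4K chunk with an explicit state buffer.
 * As far as I can tell, LZ4_compress_fast_extState() re-initializes the
 * whole state (LZ4_sizeofState() bytes, roughly 16 KiB by default) on
 * every call, so that setup cost repeats for every 4K chunk. */
static int compress_one_chunk(void *state, const char *in, char *out, int outcap)
{
        return LZ4_compress_fast_extState(state, in, out, 4096, outcap, 1);
}

(The state must be at least LZ4_sizeofState() bytes, e.g. from malloc().)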

May I ask why LZ4 needs to zero the work memory (if you mean the dest
buffer and LZ4_decompress_safe), just out of curiosity... I didn't
see that restriction before. Thanks!

Thanks,
Gao Xiang



