On Fri, Feb 26, 2021 at 07:28:54PM +0800, Gao Xiang wrote:
> On Fri, Feb 26, 2021 at 10:36:53AM +0100, David Sterba wrote:
> > On Thu, Feb 25, 2021 at 10:50:56AM -0800, Eric Biggers wrote:
> > > On Thu, Feb 25, 2021 at 02:26:47PM +0100, David Sterba wrote:
> > > >
> > > > LZ4 support has been asked for so many times that it has its own
> > > > FAQ entry:
> > > > https://btrfs.wiki.kernel.org/index.php/FAQ#Will_btrfs_support_LZ4.3F
> > > >
> > > > The decompression speed is not the only thing that should be
> > > > evaluated; the way compression works in btrfs (in 4k blocks) does
> > > > not allow good compression ratios, and overall LZ4 does not do
> > > > much better than LZO. So this is not worth the additional costs
> > > > of compatibility. With ZSTD we got the high compression, and
> > > > recently real-time compression levels have been added that we'll
> > > > use in btrfs eventually.
> > >
> > > When ZSTD support was being added to btrfs, it was claimed that
> > > btrfs compresses up to 128KB at a time
> > > (https://lore.kernel.org/r/5a7c09dd-3415-0c00-c0f2-a605a0656499@xxxxxx).
> > > So which is it -- 4KB or 128KB?
> >
> > Logical extent ranges are sliced to 128K and submitted to the
> > compression routine. Then the whole range is fed 4K at a time (or
> > more exactly in page-sized chunks) to the compression. Depending on
> > the capabilities of the compression algorithm, the 4K chunks are
> > either independent or can reuse some internal state of the
> > algorithm.
> >
> > LZO and LZ4 use some kind of embedded dictionary in the same buffer
> > and reference that dictionary directly, i.e. assuming the whole
> > input range to be contiguous. Which is something that's not trivial
> > to achieve in the kernel because pages are not contiguous in
> > general.
> >
> > Thus, LZO and LZ4 compress 4K at a time and each chunk is
> > independent. This results in a worse compression ratio because of
> > fewer data reuse possibilities. OTOH this allows decompression in
> > place.
>
> Sorry about the noise before. I misread the btrfs LZO implementation.
> Still, it sounds like that approach has a lower compression ratio
> than compressing 128KB as a whole. In principle it can achieve
> decompression in-place (with a margin of a whole LZO chunk), yet the
> LZ4/LZO formats allow a tighter, mathematically derived in-place
> margin.
>
> >
> > ZLIB and ZSTD can have a separate dictionary and don't need the
> > input chunks to be contiguous. This brings some additional overhead
> > like copying parts of the input to the dictionary and additional
> > memory for temporary structures, but with higher compression
> > ratios.
> >
> > IIRC the biggest problem for LZ4 was the cost of setting up each 4K
> > chunk: the work memory had to be zeroed. The size of the work
> > memory is tunable, but at the cost of compression ratio. Either way
> > it was either too slow or too bad.
>
> May I ask why LZ4 needs to zero the work memory (if you mean the dest
> buffer and LZ4_decompress_safe), just out of curiosity... I didn't
> see that restriction before. Thanks!

Oh, looking back again, there is a difference between the kernel LZ4
code [1] and upstream lz4 [2] that I didn't notice. If the "work
memory" above refers to that compression state, then as I understand
it there is no need for callers to zero that memory, unless something
specific to the kernel implementation itself requires it. (Also, it
seems that f2fs compression doesn't zero it either, at least going by
[3], although I have never tried that kernel-specific LZ4 compression
interface before.)
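To be concrete about what that work memory is, below is a rough sketch
of how the in-kernel LZ4 compression entry point is typically driven.
This is only an illustration I put together (the function name, buffer
handling and error paths are made up for the example, not taken from
btrfs or f2fs): the caller just allocates LZ4_MEM_COMPRESS bytes of
state and passes it in, and whether that state must be pre-zeroed or
gets reset inside the call is a property of the particular LZ4 port,
which is exactly the difference between [1] and [2] below.

/*
 * Illustrative sketch only: compress one buffer with the in-kernel
 * LZ4 interface.  Names and error handling are simplified.
 */
#include <linux/lz4.h>
#include <linux/mm.h>
#include <linux/slab.h>

static int lz4_compress_once(const char *src, unsigned int src_len,
			     char *dst, unsigned int dst_capacity)
{
	void *wrkmem;
	int out_len;

	/* Compression state handed to the library, on the order of 16 KiB. */
	wrkmem = kvmalloc(LZ4_MEM_COMPRESS, GFP_KERNEL);
	if (!wrkmem)
		return -ENOMEM;

	/* dst_capacity should be at least LZ4_compressBound(src_len). */
	out_len = LZ4_compress_default(src, dst, src_len, dst_capacity,
				       wrkmem);

	kvfree(wrkmem);
	return out_len > 0 ? out_len : -EINVAL;
}

If the per-call handling of that state is what the setup cost above
refers to, then doing it once per 4K chunk would indeed show up
directly in the btrfs numbers.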
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/lz4/lz4_compress.c#n511
[2] https://github.com/lz4/lz4/blob/dev/lib/lz4.c#L1373
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/f2fs/compress.c#n262

Thanks,
Gao Xiang

>
> Thanks,
> Gao Xiang
>