Re: [RFC] Snappy compressor for Linux Kernel (specifically, zram)

On Sat, Apr 16, 2011 at 02:11, Dan Magenheimer
<dan.magenheimer@xxxxxxxxxx> wrote:
>> From: Greg KH [mailto:greg@xxxxxxxxx]
>> Cc: devel@xxxxxxxxxxxxxxxxxxxxxx; Nitin Gupta; Dan Magenheimer
>> Subject: Re: [RFC] Snappy compressor for Linux Kernel (specifically,
>> zram)
>>
>> On Sat, Apr 16, 2011 at 12:45:41AM +0300, Zeev Tarantov wrote:
>> > On Fri, Apr 15, 2011 at 05:21, Greg KH <greg@xxxxxxxxx> wrote:
>> > > Why is this needed to be added to the kernel?  What does it provide
>> > > that users or other parts of the kernel need?
>> >
>> > It is functionally a general data compression tool that trades off
>> > compression ratio for speed. It is optimized for x86-64 and there
>> > achieves compression at 250MB/sec and decompression at 500MB/sec
>> > (YMMV*). This is a better mousetrap, that can and should replace LZO
>> > in every place where the kernel currently uses LZO.
>>
>> Like where?
>>
>> Have you done so and found it really to be faster and smaller?  If so,
>> benchmarks and the numbers will be required for this to be accepted.
>>
>> You need to show a solid use case for why to switch to this code in
>> order to have it accepted.
>
> In particular, zram and zcache both operate on a page (4K) granularity,
> so it would be interesting to see ranges of performance vs compression
> of snappy vs LZO on a large test set of binary and text pages.  I mean
> one page per test... I'm no expert but I believe some compression
> algorithms have a larger startup overhead so may not be as useful
> for compressing "smaller" (e.g. 4K) streams.

Neither LZO nor Snappy does anything like transmitting Huffman trees ahead
of the data itself, so there is no per-stream startup cost; they are simply
too fast to afford Huffman coding at all.
I can quickly make a user-space tester that compresses its input 4 KB at a
time, along the lines of the sketch below.
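Something like this rough sketch would do (assuming the userspace Snappy C
bindings from snappy-c.h; the file name test4k.c is just an example, build
with something like: cc test4k.c -lsnappy; timing code left out):

/*
 * Sketch of a per-page compression tester: reads stdin in independent
 * 4 KiB blocks, the way zram sees pages, compresses each block with
 * Snappy, and prints the overall compression ratio.
 */
#include <stdio.h>
#include <stdlib.h>
#include <snappy-c.h>

int main(void)
{
	char in[4096];
	size_t max_out = snappy_max_compressed_length(sizeof(in));
	char *out = malloc(max_out);
	unsigned long long total_in = 0, total_out = 0;
	size_t n;

	if (!out)
		return 1;

	while ((n = fread(in, 1, sizeof(in), stdin)) > 0) {
		size_t out_len = max_out;	/* in: buffer size, out: compressed size */

		if (snappy_compress(in, n, out, &out_len) != SNAPPY_OK)
			return 1;
		total_in += n;
		total_out += out_len;
	}

	printf("in: %llu out: %llu ratio: %.3f\n",
	       total_in, total_out,
	       total_in ? (double)total_out / total_in : 0.0);
	free(out);
	return 0;
}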

> Also zram and zcache benefit greatly from a good compression ratio
> to pack more compressed pages into physical pages.  Better compression
> means more pages saved in RAM, which means fewer disk accesses.
> So the tradeoff of compression ratio vs CPU-cycles-to-compress
> is difficult to evaluate without real benchmarks and the results
> may be counterintuitive.

I have linked to benchmark results from a kernel running Snappy in zram. An
ext4 filesystem on zram holding the untarred source tree of qt-4.7.1 shows
this (zram statistics, in bytes):

LZO zram:
orig_data_size	645918720
compr_data_size	320624925
mem_used_total	326627328

Snappy zram:
orig_data_size	645914624
compr_data_size	326040602
mem_used_total	332374016

That's a 1.76% difference in memory used.
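(Assuming the comparison is over mem_used_total:
(332374016 - 326627328) / 326627328 ≈ 1.76%; the compr_data_size
difference is about 1.69%.)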
I have just copied a linux-2.6 directory, including its git tree, object
files, etc. (1.2 GB), to zram:

With LZO:
orig_data_size 1269739520
compr_data_size 850054231
mem_used_total 855568384

With Snappy:
orig_data_size 1269727232
compr_data_size 861550782
mem_used_total 867438592

That's 1.39% more memory used.
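(Again over mem_used_total: (867438592 - 855568384) / 855568384 ≈ 1.39%;
compr_data_size grows by about 1.35%.)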

I don't think the compression ratio is much worse.

> Dan
>
_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/devel


