RE: [RFC] mm: add support for zsmalloc and zcache

> From: Nitin Gupta [mailto:ngupta@xxxxxxxxxx]
> Subject: Re: [RFC] mm: add support for zsmalloc and zcache
> 
> The problem is that zbud performs well only when a (compressed) page is
> either PAGE_SIZE/2 - e or PAGE_SIZE - e, where e is small. So, even if
> the average compression ratio is 2x (which is hard to believe), a
> majority of sizes can actually end up in the PAGE_SIZE/2 + e bucket and zbud
> will still give bad performance.  For instance, consider these histograms:

Whoa whoa whoa.  This is very wrong.  Zbud handles compressed pages
of any size that fits in a pageframe (almost the same as zsmalloc).
Unless there is some horrible bug you found...

Zbud _does_ require the _distribution_ of zsize to be roughly
centered around PAGE_SIZE/2 (or less) in order to pack two zpages
per pageframe.  Is that what you meant?  If so, the numbers you
posted below don't make sense to me.  Could you be more explicit
about what the numbers mean?
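
Just so we're comparing the same mental model, here is the simplification
I have in mind (not zbud's actual code; it ignores zbud's chunk rounding
and per-pageframe metadata):

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/*
 * Simplified model of zbud: a pageframe holds at most two compressed
 * pages ("buddies"), so any zsize up to PAGE_SIZE is storable, but two
 * zpages only share a pageframe when their sizes fit together.
 */
static bool zbud_buddies_fit(size_t zsize_a, size_t zsize_b)
{
	return zsize_a + zsize_b <= PAGE_SIZE;
}

So a zpage of, say, 2100 bytes is perfectly storable; it just can only
buddy with a zpage of 1996 bytes or less, which is why the *distribution*
matters rather than any individual size.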

Also, as you know, unlike zram, the architecture of tmem/frontswap
allows zcache to reject any page, so if the distribution of zsize
skews above PAGE_SIZE/2, the poorly-compressing pages can be rejected
(and thus passed through to the swap device).  This safety valve
already exists in zcache (and zcache2) to avoid situations where the
pageframes allocated would otherwise significantly exceed half the
number of zpages stored.  IMHO this is a better policy than accepting
a large number of poorly-compressed pages: if every data page
compresses down from 4096 bytes to 4032 bytes, zsmalloc stores them
all (thus using very nearly one pageframe per zpage), whereas zbud
rejects them so they simply fall through to swap.
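
As (hypothetical, oversimplified) code, the tradeoff I'm describing looks
something like the following; the two-zpages-per-pageframe assumption for
small zsizes is deliberately rough, and the function name is mine, not
anything in the tree:

#include <stdio.h>

#define PAGE_SIZE 4096

struct zsize_bin {
	unsigned int start, end;	/* compressed-size range, in bytes */
	unsigned long pages;		/* pages whose zsize falls in this range */
};

/*
 * Back-of-the-envelope comparison, not real zcache/zsmalloc code:
 * (a) store everything -- zpages over PAGE_SIZE/2 each cost close to a
 *     full pageframe;
 * (b) safety valve -- reject those zpages so they fall through to the
 *     swap device and only the well-compressed ones are cached.
 */
void compare_policies(const struct zsize_bin *bins, int nbins)
{
	unsigned long small = 0, large = 0;
	int i;

	for (i = 0; i < nbins; i++) {
		if (bins[i].end <= PAGE_SIZE / 2)
			small += bins[i].pages;
		else
			large += bins[i].pages;
	}

	printf("store everything:  ~%lu pageframes\n", small / 2 + large);
	printf("reject-to-swap:    ~%lu pageframes, %lu zpages passed to swap\n",
	       small / 2, large);
}

Plug in the bins from your histograms and the pageframe cost of the two
policies falls out directly.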
 
> # Created tar of /usr/lib (2GB) on a fairly loaded Linux system and
> compressed page-by-page using LZO:
> 
> # first two fields: bin start, end.  Third field: number of pages in that bin
> 32 286 7644
> :
> 3842 4096 3482
> 
> The only (approx) sweetspots for zbud are 1810-2064 and 3842-4096, which
> cover only a small fraction of pages.
> 
> # same page-by-page compression for 220MB ISO from project Gutenberg:
> 32 286 70
> :
> 3842 4096 804
> 
> Again, very few pages fall in the zbud-favoring bins.
> 
> So we really need a zsmalloc-style allocator which handles sizes all over
> the spectrum.  But yes, compaction remains far easier to implement on zbud.

So it remains to be seen if a third choice exists (which might be either
an enhanced zbud or an enhanced zsmalloc), right?

Dan
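
P.S. For reference, here is a rough userspace sketch of how I'd expect a
zsize histogram like the ones above to be generated (a guess at the
methodology, assuming liblzo2's lzo1x_1 on 4KB chunks of the input file;
the bin boundaries and the handling of incompressible chunks are my own
choices, not necessarily yours):

#include <lzo/lzo1x.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define NBINS     16

int main(int argc, char **argv)
{
	unsigned char in[PAGE_SIZE];
	/* LZO worst-case expansion: in_len + in_len/16 + 64 + 3 */
	unsigned char out[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS +
				   sizeof(lzo_align_t) - 1) / sizeof(lzo_align_t)];
	unsigned long bins[NBINS] = { 0 };
	FILE *f;
	int i;

	if (argc != 2 || lzo_init() != LZO_E_OK)
		return 1;
	f = fopen(argv[1], "rb");
	if (!f)
		return 1;

	/* Compress the file page-by-page and bin the compressed sizes. */
	while (fread(in, 1, PAGE_SIZE, f) == PAGE_SIZE) {
		lzo_uint outlen = sizeof(out);

		if (lzo1x_1_compress(in, PAGE_SIZE, out, &outlen, wrkmem) != LZO_E_OK)
			continue;
		if (outlen >= PAGE_SIZE)	/* incompressible: clamp to last bin */
			outlen = PAGE_SIZE - 1;
		bins[outlen * NBINS / PAGE_SIZE]++;
	}
	fclose(f);

	/* bin start, bin end, number of pages in that bin */
	for (i = 0; i < NBINS; i++)
		printf("%4d %4d %lu\n", i * PAGE_SIZE / NBINS,
		       (i + 1) * PAGE_SIZE / NBINS, bins[i]);
	return 0;
}

(Should build with "gcc histo.c -llzo2"; any partial tail chunk of the
input file is simply ignored.)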