Re: [PATCH RFC v2 0/2] mTHP-friendly compression in zsmalloc and zram based on multi-pages

On (24/11/12 09:31), Barry Song wrote:
[..]
> > Do you have any data on how this would perform with the upstream
> > kernel, i.e. without a large folio pool and the workaround, and
> > whether large granularity compression is worth having without those
> > patches?
> 
> I’d say large granularity compression isn’t a problem, but large
> granularity decompression could be.
> 
> The worst case would be if we swap out a large block, such as 16KB,
> but end up swapping in 4 times due to allocation failures, falling
> back to smaller folios. In this scenario, we would need to perform
> three redundant decompressions. I will work with Tangquan to provide
> this data this week.
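
In other words, the fallback swap-in path would look roughly like
this (a minimal userspace sketch of the scenario above, not code from
the series; all names here are made up):

/*
 * One 16KB block stored as a single compressed object, swapped back
 * in as four 4KB folios because large folio allocation failed.
 * decompress_obj() stands in for the real codec and is assumed to
 * always inflate the whole object.
 */
#include <string.h>

#define PAGE_SIZE	4096
#define MULTI_PAGES	4		/* one 16KB block */

void decompress_obj(const void *obj, unsigned char *out);  /* hypothetical */

void swap_in_fallback(const void *obj,
		      unsigned char *dst[MULTI_PAGES])
{
	unsigned char buf[MULTI_PAGES * PAGE_SIZE];
	int i;

	for (i = 0; i < MULTI_PAGES; i++) {
		/* each 4KB fault has to inflate the whole 16KB object */
		decompress_obj(obj, buf);
		memcpy(dst[i], buf + i * PAGE_SIZE, PAGE_SIZE);
	}
	/* 4 decompressions where 1 would do: 3 redundant runs */
}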

Well, apart from that... I sort of don't know.

This seems to be exclusively for the swap case (or do file-systems
use mTHP too?), and zram/zsmalloc don't really focus on one
particular usage scenario; pretty much all of our features can be
used regardless of what zram is backing - be it a swap partition or
a mounted fs.

Another thing is that I don't see how to integrate support for these
large objects with post-processing: recompression and writeback.
Well, recompression is okay-ish, I guess, but writeback is not.
Writeback works in PAGE_SIZE units, so we'd hit that worst-case
scenario here.  So, yeah, there are many questions.
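
For reference, the writeback path today is roughly this shape
(paraphrased from memory, helper names approximate - not actual zram
code):

	/* one PAGE_SIZE unit per iteration */
	for (index = 0; index < nr_pages; index++) {
		if (!slot_is_idle(zram, index))	/* hypothetical check */
			continue;
		/* decompress exactly one page... */
		read_one_page(zram, page, index);
		/* ...and submit one page-sized write to the backing device */
		write_page_to_backing_dev(zram, page, index);
	}

With a 4-page object, each of those 4 indexes would have to
decompress the same object again - the exact worst case from above.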

p.s. Sorry for the late reply.  I just started looking at the series
and don't have any solid opinions yet.



