Re: swapping file pages

Hello Luigi,

On Fri, Apr 21, 2017 at 11:26:45AM -0700, Luigi Semenzato wrote:
> On an SSD used by a typical chromebook (i.e. the one on my desk right
> now), it takes about 300us to read a random 4k page, but it takes less
> than 10us to lzo-decompress a page from the zram device.

IMO, it should be solved by the VM itself rather than by adding another
layer like the one I mention below. IOW, if the VM finds that the
reclaim/refault cost of file-backed pages is higher than that of
anonymous pages, it should tip reclaim toward the anonymous LRU.
I guess the upcoming patches from Johannes's work will be key for
this issue.
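
To make the idea concrete, here is a minimal user-space sketch (not
kernel code; the struct and helper are made up for illustration, and the
300us/10us figures are just the ones from your mail) of picking which
LRU to lean on by comparing the estimated refault cost per reclaimed
page:

/*
 * Toy illustration: decide which LRU to reclaim from by comparing the
 * expected cost of refaulting a page reclaimed from each list.
 * Everything here is hypothetical; it is not how the kernel code looks.
 */
#include <stdio.h>

struct lru_cost {
	unsigned long refaults;   /* pages that came back after reclaim */
	unsigned long reclaimed;  /* pages reclaimed from this LRU */
	unsigned long refault_us; /* cost to bring one page back, in us */
};

/* Expected cost of reclaiming one more page from this LRU. */
static double reclaim_cost(const struct lru_cost *c)
{
	if (!c->reclaimed)
		return 0.0;
	return (double)c->refaults / c->reclaimed * c->refault_us;
}

int main(void)
{
	/* Hypothetical counters: file pages refault from the SSD
	 * (~300us), anonymous pages refault from zram (~10us). */
	struct lru_cost file = { .refaults = 800, .reclaimed = 1000,
				 .refault_us = 300 };
	struct lru_cost anon = { .refaults = 900, .reclaimed = 1000,
				 .refault_us = 10 };

	if (reclaim_cost(&file) > reclaim_cost(&anon))
		printf("tip reclaim toward the anonymous LRU\n");
	else
		printf("tip reclaim toward the file LRU\n");
	return 0;
}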

> 
> Code compresses reasonably well (down to almost 50% for x86_64,
> although only 66% for ARM32), so I may be better off swapping file
> pages to zram, rather than reading them back from the SSD.  Before I
> even get started trying to do this, can anybody tell me if this is a
> stupid idea?  Or possibly a good idea, but totally impractical from an
> implementation perspective?

Although I believe it should be solved by the VM itself in the long run,
I think cleancache might help you at this moment.

Please look at cleancache. It is a hook layer for the page cache, so you
can compress pages dropped from the page cache as long as the FS supports
cleancache_ops. zcache was one implementation of that. If the hit ratio
is high, it would be reasonable.

https://lwn.net/Articles/397574/
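
For reference, a cleancache backend is just a set of callbacks
registered with the core. A rough module skeleton is below; the
prototypes follow include/linux/cleancache.h from around this time
(please double-check your tree, they have changed over the years), and
the my_* bodies are only placeholders for the compress/store logic a
real backend such as zcache would have:

/* Sketch of a cleancache backend skeleton, not working code. */
#include <linux/module.h>
#include <linux/cleancache.h>
#include <linux/mm.h>

static int my_init_fs(size_t pagesize)
{
	/* Return a pool id for this filesystem, or a negative errno. */
	return 0;
}

static void my_put_page(int pool_id, struct cleancache_filekey key,
			pgoff_t index, struct page *page)
{
	/* Called when a clean page cache page is dropped: compress it
	 * (e.g. with LZO, as zram does) and stash it keyed by
	 * (pool_id, key, index). */
}

static int my_get_page(int pool_id, struct cleancache_filekey key,
		       pgoff_t index, struct page *page)
{
	/* On page cache miss: decompress into @page and return 0 on a
	 * hit, or -1 so the FS falls back to reading from disk. */
	return -1;
}

static void my_invalidate_page(int pool_id, struct cleancache_filekey key,
			       pgoff_t index)
{
	/* The page became stale (e.g. it was rewritten); drop our copy. */
}

static void my_invalidate_inode(int pool_id, struct cleancache_filekey key)
{
	/* Drop everything stored for this inode. */
}

static void my_invalidate_fs(int pool_id)
{
	/* The filesystem is being unmounted; drop the whole pool. */
}

/* .init_shared_fs (needed for shared filesystems like ocfs2) omitted. */
static struct cleancache_ops my_cleancache_ops = {
	.init_fs	  = my_init_fs,
	.put_page	  = my_put_page,
	.get_page	  = my_get_page,
	.invalidate_page  = my_invalidate_page,
	.invalidate_inode = my_invalidate_inode,
	.invalidate_fs	  = my_invalidate_fs,
};

static int __init my_backend_init(void)
{
	/* Note: cleancache ops cannot be unregistered once registered. */
	return cleancache_register_ops(&my_cleancache_ops);
}
module_init(my_backend_init);
MODULE_LICENSE("GPL");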

One of the problems at that time was that the cache miss ratio was too
high for streaming-write workloads, because it kept used-once pages in
memory. That was pointless.

Dan suggested a PG_activated bit to detect pages that were promoted to
the active LRU list during their lifetime, and to store only those pages
in the backend (i.e., the allocator), but he retired in the middle of
the work. Johannes's patches introduce PG_workingset, which is the same
as the PG_activated bit Dan suggested, so maybe we can use that flag to
avoid the overhead.
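
If/when that flag lands, the filtering could live in the backend's
put_page hook, something like the sketch below. PageWorkingset() is
assumed here to be the page-flag test those patches would add; it is not
in mainline as of this thread, so treat the name as hypothetical:

static void my_put_page(int pool_id, struct cleancache_filekey key,
			pgoff_t index, struct page *page)
{
	/* Skip used-once pages so streaming writes don't flood the pool.
	 * PageWorkingset() is assumed from the PG_workingset work. */
	if (!PageWorkingset(page))
		return;

	/* ... compress and store as in the skeleton above ... */
}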

Another idea that used the cleancache/frontswap concept, although it did
not provide a cleancache backend at that time, is GCMA. GCMA's main goal
is to guarantee getting a contiguous area in deterministic time. For
that, it used the frontswap/cleancache concept.

http://events.linuxfoundation.org/sites/events/files/slides/gcma-guaranteed_contiguous_memory_allocator-lfklf2014_0.pdf

I hope it helps you a bit.

> 
> Thanks!
> 
