Re: swap, compress, discard: what's in the future?


 



On Tue, Jan 7, 2014 at 11:01 AM, Minchan Kim <minchan@xxxxxxxxxx> wrote:
> Hello Luigi,
>
> On Mon, Jan 06, 2014 at 06:31:29PM -0800, Luigi Semenzato wrote:
>> I would like to know (and I apologize if there is an obvious answer)
>> if folks on this list have pointers to documents or discussions
>> regarding the long-term evolution of the Linux memory manager.  I
>> realize there is plenty of shorter-term stuff to worry about, but a
>> long-term vision would be helpful---even more so if there is some
>> agreement.
>>
>> My super-simple view is that when memory reclaim is possible there is
>> a cost attached to it, and the goal is to minimize the cost.  The cost
>> for reclaiming a unit of memory of some kind is a function of various
>> parameters: the CPU cycles, the I/O bandwidth, and the latency, to
>> name the main components.  This function can change a lot depending on
>> the load and in practice it may have to be grossly approximated, but
>> the concept is valid IMO.
>>
>> For instance, the cost of compressing and decompressing RAM is mainly
>> CPU cycles.  A user program (a browser, for instance :) may be caching
>> decompressed JPEGs into transcendent (discardable) memory, for quick
>> display.  In this case, almost certainly the decompressed JPEGs should
>> be discarded before memory is compressed, under the realistic
>> assumption that one JPEG decompression is cheaper than one LZO
>> compression/decompression.  But there may be situations in which a lot
>> more work has gone into creating the application cache, and then it
>> makes sense to compress/decompress it rather than discard it.  It may
>> be hard for the kernel to figure out how expensive it is to recreate
>> the application cache, so the application should tell it.
>
> Agreed. It's very hard for the kernel to figure that out, so the VM should
> depend on hints from userspace. What you describe is exactly the use case
> for the volatile range system call that I am proposing:
>
> http://lwn.net/Articles/578761/
>
>>
>> Of course, for a cache the cost needs to be multiplied by the
>> probability that the memory will be used again in the future.  A good
>> part of the Linux VM is dedicated to estimating that probability, for
>> some kinds of memory.  But I don't see simple hooks for describing
>> various costs such as the one I mentioned, and I wonder if this
>> paradigm makes sense in general, or if it is peculiar to Chrome OS.
>
> Your statement makes sense to me, but unfortunately the current VM doesn't
> consider everything you mentioned.
> It is based only on page access recency via approximate LRU logic, plus
> some heuristics (e.g. mapped pages and VM_EXEC pages are treated as more
> precious).

It seems that the ARC page replacement algorithm in ZFS performs well and
is more intelligent:
http://en.wikipedia.org/wiki/Adaptive_replacement_cache
Is there a historical reason why Linux didn't adopt something like
ARC as its page cache replacement algorithm?

> What makes this hard is simply the complexity and overhead of the
> implementation. If someone has a good idea for defining the parameters
> and implementing them with small overhead, that would be very nice!
>

-- 
Regards,
--Bob

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .



