RE: [Lsf] [LSF][MM] page allocation & direct reclaim latency

> 1. LRU ordering - are we aging pages properly or recycling through the
>    list too aggressively? The high_wmark*8 change made recently was
>    partially about list rotations and the associated cost so it might
>    be worth listing out whatever issues people are currently aware of.

Here's one: zcache (and tmem, RAMster, and SSmem) is essentially a level-2
cache for clean page cache pages that have been reclaimed.  (Or,
more precisely, the page FRAME has been reclaimed, but the contents
have been squirreled away in zcache.)

Just like the active/inactive lists, ideally you'd like to ensure
zcache gets filled with pages that have some probability of being used
in the future, not pages you KNOW won't be used in the future but
have been left on the inactive list to rot until they are reclaimed.

There's also a sizing issue: under memory pressure, pages on the
active/inactive lists have different advantages and disadvantages vs.
pages in zcache etc.  What tuning knobs already exist?

I hacked together a (non-upstreamable) patch to only "put" clean pages
that had previously been on the active list, to play with this a bit,
but didn't pursue it.

Anyway, I'd like to include this in the above discussion.

Thanks,
Dan

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .

