Re: [PATCH 2/2][v2] mm: make kswapd try harder to keep active pages in cache

On Tue, Aug 22, 2017 at 03:35:39PM -0400, josef@xxxxxxxxxxxxxx wrote:
> From: Josef Bacik <jbacik@xxxxxx>
> 
> While testing slab reclaim I noticed that if we are running a workload
> that uses most of the system memory for its working set and we start
> putting a lot of reclaimable slab pressure on the system (think find /,
> or some other silliness), we will happily evict the active pages over
> the slab cache.  This is kind of backwards, as we want to do all that we
> can to keep the active working set in memory and instead evict these
> short-lived objects.  The same thing occurs when, say, you do a yum
> update of a few packages while your working set takes up most of RAM:
> you end up with relatively small inactive lists, and so we reclaim
> active pages even though we could reclaim these short-lived inactive
> pages.

The fundamental problem is that we cannot identify the working set and
the short-lived objects in advance without enough aging, so such a
workload transition over a short time is really hard to catch up with.

An idea in my mind is to create a two-level list (active and inactive)
for slab objects, like the LRU for pages. Objects would start on the
inactive list and would not be promoted to the active list unless they
are touched again.
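Something like the below is what I have in mind. It's only a sketch with
made-up names (object_lists, cache_object, object_add, object_touch);
it is not wired into the existing list_lru or shrinker code:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct object_lists {
	spinlock_t lock;
	struct list_head active;
	struct list_head inactive;
};

struct cache_object {
	struct list_head lru;
	bool referenced;	/* set on first touch */
	bool active;		/* currently on the active list */
};

static void object_lists_init(struct object_lists *lists)
{
	spin_lock_init(&lists->lock);
	INIT_LIST_HEAD(&lists->active);
	INIT_LIST_HEAD(&lists->inactive);
}

/* New objects always start on the inactive list. */
static void object_add(struct object_lists *lists, struct cache_object *obj)
{
	spin_lock(&lists->lock);
	obj->referenced = false;
	obj->active = false;
	list_add(&obj->lru, &lists->inactive);
	spin_unlock(&lists->lock);
}

/* Promote to the active list only on a second touch. */
static void object_touch(struct object_lists *lists, struct cache_object *obj)
{
	spin_lock(&lists->lock);
	if (!obj->referenced) {
		obj->referenced = true;
	} else if (!obj->active) {
		obj->active = true;
		list_move(&obj->lru, &lists->active);
	}
	spin_unlock(&lists->lock);
}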

Once we see page cache refaults, that would be a good signal to
accelerate slab shrinking. Or, reclaim the shrinker's inactive list
first, before shrinking the page cache's active list.
The same approach has been used for the page cache's inactive list to
avoid reclaiming anonymous pages; see get_scan_count.
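Very roughly, and only as an illustration (refaults_now, refaults_last
and slab_priority are made-up names; nothing here maps onto current
vmscan code):

/*
 * Refaults mean we recently evicted pages that were still part of the
 * working set.  Treat that as a signal to shrink slab harder; a lower
 * priority value means a more aggressive scan, as in vmscan.
 */
static int adjust_slab_priority(int slab_priority,
				unsigned long refaults_now,
				unsigned long refaults_last)
{
	if (refaults_now > refaults_last && slab_priority > 1)
		slab_priority--;

	return slab_priority;
}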

It's non-trivial, but worth trying if systems with heavy slab object
usage become common, IMHO.

> 
> My approach here is twofold.  First, keep track of the difference in
> inactive and slab pages since the last time kswapd ran.  In the first
> run this will just be the overall counts of inactive and slab, but for
> each subsequent run we'll have a good idea of where the memory pressure
> is coming from.  Then we use this information to put pressure on either
> the inactive lists or the slab caches, depending on where the pressure
> is coming from.
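
IIUC, the bookkeeping boils down to something like the below (made-up
names, paraphrasing the description above rather than the patch itself):

#include <linux/types.h>

struct reclaim_deltas {
	unsigned long last_inactive;
	unsigned long last_slab;
	long inactive_delta;
	long slab_delta;
};

/* Record how much inactive and slab grew since the last kswapd run. */
static void update_reclaim_deltas(struct reclaim_deltas *d,
				  unsigned long inactive_now,
				  unsigned long slab_now)
{
	d->inactive_delta = (long)inactive_now - (long)d->last_inactive;
	d->slab_delta = (long)slab_now - (long)d->last_slab;
	d->last_inactive = inactive_now;
	d->last_slab = slab_now;
}

/* Put pressure where the growth happened. */
static bool prefer_slab_reclaim(const struct reclaim_deltas *d)
{
	return d->slab_delta > d->inactive_delta;
}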

I don't like this idea.

The pressure should be fair if possible, and the victim decision should
come from aging. If we want to put more pressure somewhere, it should
come from some feedback loop, and I don't think the diff of allocations
would be a good factor for that.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx


