On Wed, 2017-05-03 at 14:38 -0400, Josef Bacik wrote:
> > > +		if (nr_inactive > total_high_wmark && nr_inactive > nr_slab)
> > > +			skip_slab = true;
> > 
> > I worry that this may be a little too aggressive, and result in the
> > slab cache growing much larger than it should be on some systems.
> > 
> > I wonder if it may make more sense to have the aggressiveness of
> > slab scanning depend on the ratio of inactive to reclaimable slab
> > pages, rather than having a hard cut-off like this?
> 
> So I originally had a thing that kept track of the rate of change of
> inactive vs slab between kswapd runs, but this worked fine so I
> figured simpler was better. Keep in mind that we only skip slab the
> first loop through, so if we fail to free enough on the inactive list
> the first time through then we start evicting slab as well. The idea
> is (and my testing bore this out) that with the new size ratio way of
> shrinking slab we would sometimes be over zealous and evict slab that
> we were actively using, even though we had reclaimed plenty of pages
> from our inactive list to satisfy our sc->nr_to_reclaim.

My worry is that, since we try to keep the active to inactive ratio
about equal for file pages, many systems could end up with equal
amounts of active file pages, inactive file pages, and reclaimable
slab.

Not only could that be a gigantic waste of memory for many workloads,
it could also exacerbate the "reclaim slab objects forever without
freeing any memory" problem once we do need the memory for something
else later on.

> I could probably change the ratio in the sc->inactive_only case to be
> based on the slab to inactive ratio and see how that turns out, I'll
> get that wired up and let you know how it goes. Thanks,

Looking forward to it. I am glad to see this problem being attacked :)
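(For concreteness, a heuristic along those lines could look roughly
like the sketch below. Purely illustrative: slab_scan_pressure() and
its parameters are made-up names, not the actual patch or any
existing kernel interface.)

	/*
	 * Hypothetical sketch: scale slab scan pressure by the ratio
	 * of reclaimable slab pages to inactive file pages, so the
	 * pressure ramps up smoothly as slab grows relative to the
	 * inactive list, instead of being switched off by a hard
	 * cutoff.
	 */
	static unsigned long slab_scan_pressure(unsigned long nr_inactive,
						unsigned long nr_slab,
						unsigned long base_pressure)
	{
		/* No inactive pages to reclaim: apply full slab pressure. */
		if (!nr_inactive)
			return base_pressure;

		/*
		 * pressure ~= base * nr_slab / nr_inactive: almost no
		 * slab scanning while the inactive list dwarfs slab,
		 * approaching full pressure as the two converge.
		 */
		return base_pressure * nr_slab / nr_inactive;
	}

-- 
All rights reversed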