The patch titled
     Subject: mm,vmscan: accumulated slab reclaim pressure fixes
has been added to the -mm tree.  Its filename is
     mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix-2.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix-2.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix-2.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Rik van Riel <riel@xxxxxxxxxxx>
Subject: mm,vmscan: accumulated slab reclaim pressure fixes

Jonathan asked about a divide by zero if a slab with shrinker->seeks == 0
comes into the new code path.  This appears to be possible because the
shrinker list is locked with a read-write semaphore, which allows multiple
parallel reclaimers.

This led me to take another, closer look at the code and find that a small
slab with shrinker->seeks > 4 might not have any items reclaimed from it
at all.

This patch fixes both of these bugs.

Link: http://lkml.kernel.org/r/20190129142831.6a373403@xxxxxxxxxxxxxxxxxxxx
Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
Suggested-by: Jonathan Lemon <bsd@xxxxxx>
Cc: Chris Mason <clm@xxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

--- a/mm/vmscan.c~mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix-2
+++ a/mm/vmscan.c
@@ -500,14 +500,17 @@ static unsigned long do_shrink_slab(stru
 	 * corresponding structures like per-cpu stats and kmem caches
 	 * can be really big, so it may lead to a significant waste of memory.
 	 */
-	if (!delta) {
-		shrinker->small_scan += freeable;
+	if (!delta && shrinker->seeks) {
+		unsigned long nr_considered;
 
-		delta = shrinker->small_scan >> priority;
-		shrinker->small_scan -= delta << priority;
+		shrinker->small_scan += freeable;
+		nr_considered = shrinker->small_scan >> priority;
 
-		delta *= 4;
+		delta = 4 * nr_considered;
 		do_div(delta, shrinker->seeks);
+
+		if (delta)
+			shrinker->small_scan -= nr_considered << priority;
 	}
 
 	total_scan += delta;
_

Patches currently in -mm which might be from riel@xxxxxxxxxxx are

mmslabvmscan-accumulate-gradual-pressure-on-small-slabs.patch
mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix-2.patch
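
For readers who want to see the fixed arithmetic in isolation, below is a
minimal userspace sketch of the small-slab branch from the hunk above.  It
is an illustration only: the struct layout, the hypothetical helper
small_slab_delta(), the plain C division standing in for do_div(), and the
numbers in main() are assumptions, not the actual kernel implementation.

/*
 * Userspace sketch (not kernel code) of the fixed small-slab pressure
 * accumulation.  In the kernel this branch only runs when the normal
 * delta calculation rounded down to zero.
 */
#include <stdio.h>

struct shrinker {
	unsigned int seeks;		/* cost to recreate one object */
	unsigned long small_scan;	/* accumulated, not-yet-used pressure */
};

static unsigned long small_slab_delta(struct shrinker *shrinker,
				      unsigned long freeable, int priority)
{
	unsigned long delta;
	unsigned long nr_considered;

	/* Fix #1: a shrinker with seeks == 0 must not reach the division. */
	if (!shrinker->seeks)
		return 0;

	shrinker->small_scan += freeable;
	nr_considered = shrinker->small_scan >> priority;

	delta = 4 * nr_considered;
	delta /= shrinker->seeks;	/* do_div(delta, shrinker->seeks) in the kernel */

	/*
	 * Fix #2: only consume the accumulated pressure once it actually
	 * yields a nonzero delta, so a slab with seeks > 4 keeps building
	 * pressure instead of silently losing it on every scan.
	 */
	if (delta)
		shrinker->small_scan -= nr_considered << priority;

	return delta;
}

int main(void)
{
	struct shrinker s = { .seeks = 8, .small_scan = 0 };
	int pass;

	/* With seeks > 4, pressure accumulates until delta becomes nonzero. */
	for (pass = 1; pass <= 4; pass++) {
		unsigned long delta = small_slab_delta(&s, 100, 6);

		printf("pass %d: delta=%lu small_scan=%lu\n",
		       pass, delta, s.small_scan);
	}
	return 0;
}

With the pre-fix code, the seeks > 4 case above would consume the
accumulated pressure even when the division rounded delta down to zero, so
such a slab never saw any reclaim; with the fix the pressure carries over
until it is large enough to produce a nonzero delta.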