The patch titled
     Subject: Revert "mm: slowly shrink slabs with a relatively small number of objects"
has been added to the -mm tree.  Its filename is
     revert-mm-slowly-shrink-slabs-with-a-relatively-small-number-of-objects.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/revert-mm-slowly-shrink-slabs-with-a-relatively-small-number-of-objects.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/revert-mm-slowly-shrink-slabs-with-a-relatively-small-number-of-objects.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Dave Chinner <dchinner@xxxxxxxxxx>
Subject: Revert "mm: slowly shrink slabs with a relatively small number of objects"

This reverts commit 172b06c32b949759fe6313abec514bc4f15014f4.

This change increased the aggressiveness of shrinker reclaim, causing
small-cache and low-priority reclaim to greatly increase scanning
pressure on small caches.  As a result, light memory pressure has a
disproportionate effect on small caches, and causes large caches to be
reclaimed much faster than previously.  This greatly perturbs the
delicate balance of the VFS caches (dentry/inode vs file page cache)
such that the inode/dentry caches are reclaimed much, much faster than
the page cache, and this drives us into several other caching-imbalance
related problems.

As such, this is a bad change and needs to be reverted.

[Needs some massaging to retain the later seekless shrinker
modifications.]

Link: http://lkml.kernel.org/r/20190130041707.27750-3-david@xxxxxxxxxxxxx
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Spock <dairinin@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/vmscan.c~revert-mm-slowly-shrink-slabs-with-a-relatively-small-number-of-objects
+++ a/mm/vmscan.c
@@ -491,16 +491,6 @@ static unsigned long do_shrink_slab(stru
 		delta = freeable / 2;
 	}
 
-	/*
-	 * Make sure we apply some minimal pressure on default priority
-	 * even on small cgroups. Stale objects are not only consuming memory
-	 * by themselves, but can also hold a reference to a dying cgroup,
-	 * preventing it from being reclaimed. A dying cgroup with all
-	 * corresponding structures like per-cpu stats and kmem caches
-	 * can be really big, so it may lead to a significant waste of memory.
-	 */
-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
-
 	total_scan += delta;
 	if (total_scan < 0) {
 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
_

Patches currently in -mm which might be from dchinner@xxxxxxxxxx are

revert-mm-dont-reclaim-inodes-with-many-attached-pages.patch
revert-mm-slowly-shrink-slabs-with-a-relatively-small-number-of-objects.patch
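
For context on the arithmetic behind the changelog, below is a minimal
userspace sketch of how the floor being reverted changes the scan delta
for a small cache under light pressure.  It is illustrative only, not
the kernel's exact do_shrink_slab() code: the constants, the simplified
priority scaling, and the helper names are assumptions.

	/*
	 * Sketch (not kernel code): approximate scan-delta computation
	 * in do_shrink_slab(), with and without the reverted floor.
	 */
	#include <stdio.h>

	#define DEFAULT_SEEKS	2	/* assumed shrinker seeks value */
	#define BATCH_SIZE	128ULL	/* assumed default batch size */

	static unsigned long long min_u64(unsigned long long a,
					  unsigned long long b)
	{
		return a < b ? a : b;
	}

	static unsigned long long max_u64(unsigned long long a,
					  unsigned long long b)
	{
		return a > b ? a : b;
	}

	/* Delta as restored by the revert: scales with cache size. */
	static unsigned long long delta_after_revert(unsigned long long freeable,
						     int priority)
	{
		return (freeable >> priority) * 4 / DEFAULT_SEEKS;
	}

	/* Delta with commit 172b06c32b94: a minimum floor is enforced. */
	static unsigned long long delta_before_revert(unsigned long long freeable,
						      int priority)
	{
		unsigned long long delta = delta_after_revert(freeable, priority);

		return max_u64(delta, min_u64(freeable, BATCH_SIZE));
	}

	int main(void)
	{
		/* A small cache scanned at the default reclaim priority. */
		unsigned long long freeable = 1000;
		int priority = 12;

		printf("without floor: %llu objects to scan\n",
		       delta_after_revert(freeable, priority));   /* 0 */
		printf("with floor:    %llu objects to scan\n",
		       delta_before_revert(freeable, priority));  /* 128 */
		return 0;
	}

With freeable = 1000 at the assumed default priority of 12, the unfloored
delta rounds down to 0, while the floored version scans a full batch of
128 objects on every pass, which is the disproportionate pressure on
small caches that the changelog describes.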