Ensure that shrinkers are given the option to completely drop their
caches even when those caches are smaller than the batch size. This
helps improve memory headroom by ensuring that, under significant
memory pressure, shrinkers can drop all of their caches. The shrinkers
are called more aggressively only during background (kswapd) memory
reclaim, in order to avoid hurting the performance of direct reclaim.

Signed-off-by: Sudarshan Rajagopalan <sudaraja@xxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
 mm/vmscan.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9727dd8e2581..35973665ae64 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -424,6 +424,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	long batch_size = shrinker->batch ? shrinker->batch
 					  : SHRINK_BATCH;
 	long scanned = 0, next_deferred;
+	long min_cache_size = batch_size;
+
+	if (current_is_kswapd())
+		min_cache_size = 0;
 
 	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
 		nid = 0;
@@ -503,7 +507,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * scanning at high prio and therefore should try to reclaim as much as
 	 * possible.
 	 */
-	while (total_scan >= batch_size ||
+	while (total_scan > min_cache_size ||
 	       total_scan >= freeable) {
 		unsigned long ret;
 		unsigned long nr_to_scan = min(batch_size, total_scan);
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
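
Not part of the patch: the standalone userspace sketch below only models
the arithmetic of the scan-loop condition before and after the change,
assuming a small slab cache whose priority-scaled scan target
(total_scan) is below both the batch size and the number of freeable
objects. The names model_scan and is_kswapd are hypothetical; the real
logic lives in do_shrink_slab() in mm/vmscan.c and also handles deferral
and the shrinker callbacks, which are left out here.

	#include <stdio.h>

	#define SHRINK_BATCH 128

	/* Model of the do_shrink_slab() scan loop's entry condition. */
	static long model_scan(long total_scan, long freeable,
			       long batch_size, int is_kswapd)
	{
		/* With the patch, kswapd may scan below one batch worth. */
		long min_cache_size = is_kswapd ? 0 : batch_size;
		long scanned = 0;

		while (total_scan > min_cache_size || total_scan >= freeable) {
			long nr_to_scan = total_scan < batch_size ?
						total_scan : batch_size;

			if (nr_to_scan == 0)
				break;	/* nothing left to hand to the shrinker */

			/* Pretend the shrinker scans everything it is asked to. */
			scanned += nr_to_scan;
			total_scan -= nr_to_scan;
		}
		return scanned;
	}

	int main(void)
	{
		/*
		 * A cache with fewer freeable objects than one batch, where
		 * the priority-scaled target is only a fraction of that.
		 */
		long freeable = 100, total_scan = 25;

		printf("direct reclaim scans %ld objects\n",
		       model_scan(total_scan, freeable, SHRINK_BATCH, 0));
		printf("kswapd scans %ld objects\n",
		       model_scan(total_scan, freeable, SHRINK_BATCH, 1));
		return 0;
	}

With these numbers the direct-reclaim path scans nothing, which is the
same behaviour as before the patch for both callers, while the kswapd
path now scans all 25 requested objects, so small caches are no longer
skipped during background reclaim.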