Now that reclaiming anon memory is more prevalent (Johannes describes
this well in commit f53af4285d77 ("mm: vmscan: fix extreme overreclaim
and swap floods")), we've been seeing large bursts (sometimes on the
order of multiple GiBs) of anon memory being reclaimed despite
swappiness being very low (=1) and there being plenty of page cache
remaining. That commit helped reduce these swap storms; however, it did
not fully curb the effect.

Upon further investigation I noticed these swap storms correspond to
the activation of file_is_tiny. file_is_tiny is computed on a per-node
basis; if reclaim drains the page cache on one node while the scheduler
is preferring new allocations on a separate node, file_is_tiny will
remain set for a very long time, constantly draining anon from the node
that is low on page cache. These bursts of reclaim are also seen in the
single-node case, where once file_is_tiny=1, anon reclaim is too
aggressive with a low swappiness value.

Reduce these extreme bursts of anon reclaim by scaling total_high_wmark
down by the reclaim priority. This activates file_is_tiny far less
often, and for smaller bursts.

Fixes: ccc5dc67340c ("mm/vmscan: make active/inactive ratio as 1:1 for anon lru")
Fixes: 5df741963d52 ("mm: fix LRU balancing effect of new transparent huge pages")
Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 026199c047e0..0d288bb5354e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2882,7 +2882,7 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
 
 		sc->file_is_tiny =
-			file + free <= total_high_wmark &&
+			file + free <= (total_high_wmark >> sc->priority) &&
			!(sc->may_deactivate & DEACTIVATE_ANON) &&
			anon >> sc->priority;
	}
-- 
2.38.1
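
Illustration only, not part of the patch: a minimal userspace sketch of
how the proposed right shift scales the file_is_tiny threshold across
the priority range. DEF_PRIORITY matches the kernel's starting reclaim
priority; the watermark value is a made-up example.

#include <stdio.h>

#define DEF_PRIORITY 12	/* reclaim starts here and counts down to 0 */

int main(void)
{
	/* Assumed example: per-node high watermarks summing to 256Ki pages. */
	unsigned long total_high_wmark = 256 * 1024;
	int priority;

	for (priority = DEF_PRIORITY; priority >= 0; priority--)
		printf("priority %2d: threshold = %lu pages\n",
		       priority, total_high_wmark >> priority);

	return 0;
}

At the initial priority the threshold is total_high_wmark >> 12 (64
pages in this example), so file_is_tiny only engages early in reclaim
when file + free is genuinely tiny; the full watermark only applies
once reclaim has fallen all the way to priority 0.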