This is a note to let you know that I've just added the patch titled

    mm: vmscan: do not swap anon pages just because free+file is low

to the 3.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
and it can be found in the queue-3.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 0bf1457f0cfca7bc026a82323ad34bcf58ad035d Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Date: Tue, 8 Apr 2014 16:04:10 -0700
Subject: mm: vmscan: do not swap anon pages just because free+file is low

From: Johannes Weiner <hannes@xxxxxxxxxxx>

commit 0bf1457f0cfca7bc026a82323ad34bcf58ad035d upstream.

Page reclaim force-scans / swaps anonymous pages when file cache drops
below the high watermark of a zone in order to prevent what little cache
remains from thrashing.

However, on bigger machines the high watermark value can be quite large
and when the workload is dominated by a static anonymous/shmem set, the
file set might just be a small window of used-once cache.  In such
situations, the VM starts swapping heavily when instead it should be
recycling the no longer used cache.

This is a longer-standing problem, but it's more likely to trigger after
commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
because file pages can no longer accumulate in a single zone and are
dispersed into smaller fractions among the available zones.

To resolve this, do not force scan anon when file pages are low but
instead rely on the scan/rotation ratios to make the right prediction.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Rafael Aquini <aquini@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Suleiman Souhlal <suleiman@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 mm/vmscan.c |   16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1848,7 +1848,7 @@ static void get_scan_count(struct lruvec
 	struct zone *zone = lruvec_zone(lruvec);
 	unsigned long anon_prio, file_prio;
 	enum scan_balance scan_balance;
-	unsigned long anon, file, free;
+	unsigned long anon, file;
 	bool force_scan = false;
 	unsigned long ap, fp;
 	enum lru_list lru;
@@ -1902,20 +1902,6 @@ static void get_scan_count(struct lruvec
 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
 
 	/*
-	 * If it's foreseeable that reclaiming the file cache won't be
-	 * enough to get the zone back into a desirable shape, we have
-	 * to swap.  Better start now and leave the - probably heavily
-	 * thrashing - remaining file pages alone.
-	 */
-	if (global_reclaim(sc)) {
-		free = zone_page_state(zone, NR_FREE_PAGES);
-		if (unlikely(file + free <= high_wmark_pages(zone))) {
-			scan_balance = SCAN_ANON;
-			goto out;
-		}
-	}
-
-	/*
 	 * There is enough inactive page cache, do not reclaim
 	 * anything from the anonymous working set right now.
 	 */


Patches currently in stable-queue which might be from hannes@xxxxxxxxxxx are

queue-3.14/mm-page_alloc-spill-to-remote-nodes-before-waking-kswapd.patch
queue-3.14/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
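As a standalone illustration of the heuristic this patch removes, here is a
minimal sketch in plain C. The names and numbers below (zone_stats,
decide_scan_balance, the sample page counts in main) are made up for the
example and are not kernel code; they only mirror the removed check. Before
the change, get_scan_count() would force SCAN_ANON whenever a zone's
free + file pages fell at or below its high watermark; after the change that
shortcut is gone and the usual scan/rotation ratios decide the anon/file
balance.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the kernel's per-zone state. */
enum scan_balance { SCAN_EQUAL, SCAN_FILE, SCAN_ANON };

struct zone_stats {
	unsigned long nr_free;     /* roughly NR_FREE_PAGES              */
	unsigned long nr_file;     /* active + inactive file LRU pages   */
	unsigned long high_wmark;  /* roughly high_wmark_pages(zone)     */
};

/*
 * Pre-patch behaviour: if reclaiming the remaining file cache could not
 * lift the zone back above its high watermark, force anon scanning.
 * Post-patch behaviour: never force SCAN_ANON here; leave the decision
 * to the regular scan/rotation ratios.
 */
static enum scan_balance decide_scan_balance(const struct zone_stats *z,
					     bool pre_patch)
{
	if (pre_patch && z->nr_file + z->nr_free <= z->high_wmark)
		return SCAN_ANON;

	return SCAN_EQUAL;	/* fall through to ratio-based balancing */
}

int main(void)
{
	/*
	 * Example numbers (in pages) chosen to trip the old heuristic: a
	 * zone with a large high watermark but only a sliver of used-once
	 * file cache, as described in the commit message.
	 */
	struct zone_stats z = {
		.nr_free = 20000, .nr_file = 10000, .high_wmark = 40000
	};

	printf("pre-patch : %s\n",
	       decide_scan_balance(&z, true) == SCAN_ANON ?
	       "force SCAN_ANON (swap)" : "ratio-based");
	printf("post-patch: %s\n",
	       decide_scan_balance(&z, false) == SCAN_ANON ?
	       "force SCAN_ANON (swap)" : "ratio-based");
	return 0;
}

With these sample numbers the pre-patch path reports "force SCAN_ANON
(swap)" while the post-patch path reports "ratio-based", matching the
workload pattern the commit message describes: a big machine whose high
watermark dwarfs a small, used-once file set.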