The patch titled
     mm: vmscan: immediately reclaim end-of-LRU dirty pages when writeback completes
has been added to the -mm tree.  Its filename is
     mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find out what to do
about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: mm: vmscan: immediately reclaim end-of-LRU dirty pages when writeback completes
From: Mel Gorman <mgorman@xxxxxxx>

When direct reclaim encounters a dirty page, it gets recycled around the
LRU for another cycle.  This patch marks the page PageReclaim similar to
deactivate_page() so that the page gets reclaimed almost immediately after
the page gets cleaned.  This is to avoid reclaiming clean pages that are
younger than a dirty page encountered at the end of the LRU that might
have been something like a use-once page.
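For context (not part of the change itself): the reason SetPageReclaim()
translates into near-immediate reclaim is the existing writeback-completion
path, which rotates a PG_reclaim page to the tail of the inactive LRU once
it is clean.  A simplified, paraphrased sketch of that path, not the exact
mm/filemap.c source:

	void end_page_writeback(struct page *page)
	{
		/*
		 * A page tagged PG_reclaim is moved to the tail of the
		 * inactive LRU as soon as writeback completes, so the
		 * now-clean page becomes the next reclaim candidate.
		 */
		if (TestClearPageReclaim(page))
			rotate_reclaimable_page(page);

		/* ... then clear PG_writeback and wake any waiters ... */
	}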
Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
Acked-by: Johannes Weiner <jweiner@xxxxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Minchan Kim <minchan.kim@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Alex Elder <aelder@xxxxxxx>
Cc: Theodore Ts'o <tytso@xxxxxxx>
Cc: Chris Mason <chris.mason@xxxxxxxxxx>
Cc: Dave Hansen <dave@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <>
---

 include/linux/mmzone.h |    2 +-
 mm/vmscan.c            |   10 +++++++++-
 mm/vmstat.c            |    2 +-
 3 files changed, 11 insertions(+), 3 deletions(-)

diff -puN include/linux/mmzone.h~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes include/linux/mmzone.h
--- a/include/linux/mmzone.h~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes
+++ a/include/linux/mmzone.h
@@ -100,7 +100,7 @@ enum zone_stat_item {
 	NR_UNSTABLE_NFS,	/* NFS unstable pages */
 	NR_BOUNCE,
 	NR_VMSCAN_WRITE,
-	NR_VMSCAN_WRITE_SKIP,
+	NR_VMSCAN_IMMEDIATE,	/* Prioritise for reclaim when writeback ends */
 	NR_WRITEBACK_TEMP,	/* Writeback using temporary buffers */
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
diff -puN mm/vmscan.c~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes
+++ a/mm/vmscan.c
@@ -867,7 +867,15 @@ static unsigned long shrink_page_list(st
 			 */
 			if (page_is_file_cache(page) &&
 					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
-				inc_zone_page_state(page, NR_VMSCAN_WRITE_SKIP);
+				/*
+				 * Immediately reclaim when written back.
+				 * Similar in principal to deactivate_page()
+				 * except we already have the page isolated
+				 * and know it's dirty
+				 */
+				inc_zone_page_state(page, NR_VMSCAN_IMMEDIATE);
+				SetPageReclaim(page);
+
 				goto keep_locked;
 			}
 
diff -puN mm/vmstat.c~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes mm/vmstat.c
--- a/mm/vmstat.c~mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes
+++ a/mm/vmstat.c
@@ -702,7 +702,7 @@ const char * const vmstat_text[] = {
 	"nr_unstable",
 	"nr_bounce",
 	"nr_vmscan_write",
-	"nr_vmscan_write_skip",
+	"nr_vmscan_immediate_reclaim",
 	"nr_writeback_temp",
 	"nr_isolated_anon",
 	"nr_isolated_file",
_

Patches currently in -mm which might be from mgorman@xxxxxxx are

mm-compaction-trivial-clean-up-in-acct_isolated.patch
mm-change-isolate-mode-from-define-to-bitwise-type.patch
mm-compaction-make-isolate_lru_page-filter-aware.patch
mm-zone_reclaim-make-isolate_lru_page-filter-aware.patch
mm-migration-clean-up-unmap_and_move.patch
mm-page-writebackc-make-determine_dirtyable_memory-static-again.patch
mm-vmscan-do-not-writeback-filesystem-pages-in-direct-reclaim.patch
mm-vmscan-remove-dead-code-related-to-lumpy-reclaim-waiting-on-pages-under-writeback.patch
xfs-warn-if-direct-reclaim-tries-to-writeback-pages.patch
ext4-warn-if-direct-reclaim-tries-to-writeback-pages.patch
mm-vmscan-do-not-writeback-filesystem-pages-in-kswapd-except-in-high-priority.patch
mm-vmscan-throttle-reclaim-if-encountering-too-many-dirty-pages-under-writeback.patch
mm-vmscan-immediately-reclaim-end-of-lru-dirty-pages-when-writeback-completes.patch
hugepages-fix-race-between-hugetlbfs-umount-and-quota-update-checkpatch-fixes.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html