The patch titled
     vmscan: page_check_references(): check low order lumpy reclaim properly
has been added to the -mm tree.  Its filename is
     vmscan-page_check_references-check-low-order-lumpy-reclaim-properly.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find
out what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: vmscan: page_check_references(): check low order lumpy reclaim properly
From: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>

When vmscan is in lumpy reclaim mode, it has to ignore the referenced bit
so that it can build contiguous runs of free pages, but the current
page_check_references() doesn't do this.  Fix it.
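The condition this patch moves into shrink_zone() can be sketched as a
standalone helper.  This is an illustrative sketch, not kernel code: the
function name should_lumpy_reclaim() is hypothetical, and the constants
mirror the kernel's PAGE_ALLOC_COSTLY_ORDER (3) and DEF_PRIORITY (12):

```c
/* Hypothetical standalone sketch of the lumpy-reclaim decision that the
 * patch computes once per shrink_zone() call and stores in
 * sc->lumpy_reclaim.  Constants mirror the kernel's values. */
#define PAGE_ALLOC_COSTLY_ORDER	3
#define DEF_PRIORITY		12

static int should_lumpy_reclaim(int order, int priority)
{
	/* A large contiguous chunk is needed: always reclaim lumpily. */
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return 1;
	/* A smaller high-order allocation that is having trouble
	 * (priority has dropped): fall back to lumpy reclaim too. */
	if (order && priority < DEF_PRIORITY - 2)
		return 1;
	/* Order-0 allocations never need lumpy reclaim. */
	return 0;
}
```

With the decision cached in scan_control, page_check_references() can test
sc->lumpy_reclaim instead of re-deriving only the costly-order half of the
condition, which is the bug being fixed.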
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Reviewed-by: Minchan Kim <minchan.kim@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Lee Schermerhorn <Lee.Schermerhorn@xxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c |   32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff -puN mm/vmscan.c~vmscan-page_check_references-check-low-order-lumpy-reclaim-properly mm/vmscan.c
--- a/mm/vmscan.c~vmscan-page_check_references-check-low-order-lumpy-reclaim-properly
+++ a/mm/vmscan.c
@@ -77,6 +77,8 @@ struct scan_control {

 	int order;

+	int lumpy_reclaim;
+
 	/* Which cgroup do we reclaim from */
 	struct mem_cgroup *mem_cgroup;

@@ -575,7 +577,7 @@ static enum page_references page_check_r
 	referenced_page = TestClearPageReferenced(page);

 	/* Lumpy reclaim - ignore references */
-	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
+	if (sc->lumpy_reclaim)
 		return PAGEREF_RECLAIM;

 	/*
@@ -1125,7 +1127,6 @@ static unsigned long shrink_inactive_lis
 	unsigned long nr_scanned = 0;
 	unsigned long nr_reclaimed = 0;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
-	int lumpy_reclaim = 0;

 	while (unlikely(too_many_isolated(zone, file, sc))) {
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -1135,17 +1136,6 @@ static unsigned long shrink_inactive_lis
 		return SWAP_CLUSTER_MAX;
 	}

-	/*
-	 * If we need a large contiguous chunk of memory, or have
-	 * trouble getting a small set of contiguous pages, we
-	 * will reclaim both active and inactive pages.
-	 *
-	 * We use the same threshold as pageout congestion_wait below.
-	 */
-	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
-		lumpy_reclaim = 1;
-	else if (sc->order && priority < DEF_PRIORITY - 2)
-		lumpy_reclaim = 1;

 	pagevec_init(&pvec, 1);

@@ -1158,7 +1148,7 @@ static unsigned long shrink_inactive_lis
 		unsigned long nr_freed;
 		unsigned long nr_active;
 		unsigned int count[NR_LRU_LISTS] = { 0, };
-		int mode = lumpy_reclaim ? ISOLATE_BOTH : ISOLATE_INACTIVE;
+		int mode = sc->lumpy_reclaim ? ISOLATE_BOTH : ISOLATE_INACTIVE;
 		unsigned long nr_anon;
 		unsigned long nr_file;

@@ -1211,7 +1201,7 @@ static unsigned long shrink_inactive_lis
 		 * but that should be acceptable to the caller
 		 */
 		if (nr_freed < nr_taken && !current_is_kswapd() &&
-		    lumpy_reclaim) {
+		    sc->lumpy_reclaim) {
 			congestion_wait(BLK_RW_ASYNC, HZ/10);

 			/*
@@ -1653,6 +1643,18 @@ static void shrink_zone(int priority, st

 	get_scan_count(zone, sc, nr, priority);

+	/*
+	 * If we need a large contiguous chunk of memory, or have
+	 * trouble getting a small set of contiguous pages, we
+	 * will reclaim both active and inactive pages.
+	 */
+	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
+		sc->lumpy_reclaim = 1;
+	else if (sc->order && priority < DEF_PRIORITY - 2)
+		sc->lumpy_reclaim = 1;
+	else
+		sc->lumpy_reclaim = 0;
+
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 				nr[LRU_INACTIVE_FILE]) {
 		for_each_evictable_lru(l) {
_

Patches currently in -mm which might be from kosaki.motohiro@xxxxxxxxxxxxxx are

linux-next.patch
rmap-add-exclusively-owned-pages-to-the-newest-anon_vma.patch
page-allocator-reduce-fragmentation-in-buddy-allocator-by-adding-buddies-that-are-merging-to-the-tail-of-the-free-lists.patch
mm-remove-return-value-of-putback_lru_pages.patch
mempolicy-remove-redundant-code.patch
oom-filter-tasks-not-sharing-the-same-cpuset.patch
oom-sacrifice-child-with-highest-badness-score-for-parent.patch
oom-select-task-from-tasklist-for-mempolicy-ooms.patch
oom-remove-special-handling-for-pagefault-ooms.patch
oom-badness-heuristic-rewrite.patch
oom-deprecate-oom_adj-tunable.patch
oom-replace-sysctls-with-quick-mode.patch
oom-avoid-oom-killer-for-lowmem-allocations.patch
oom-remove-unnecessary-code-and-cleanup.patch
oom-default-to-killing-current-for-pagefault-ooms.patch
oom-avoid-race-for-oom-killed-tasks-detaching-mm-prior-to-exit.patch
oom-hold-tasklist_lock-when-dumping-tasks.patch
oom-give-current-access-to-memory-reserves-if-it-has-been-killed.patch
oom-avoid-sending-exiting-tasks-a-sigkill.patch
oom-clean-up-oom_kill_task.patch
oom-clean-up-oom_badness.patch
mempolicy-dont-call-mpol_set_nodemask-when-no_context.patch
mempolicy-lose-unnecessary-loop-variable-in-mpol_parse_str.patch
mempolicy-rename-policy_types-and-cleanup-initialization.patch
mempolicy-factor-mpol_shared_policy_init-return-paths.patch
mempolicy-document-cpuset-interaction-with-tmpfs-mpol-mount-option.patch
mm-migration-take-a-reference-to-the-anon_vma-before-migrating.patch
mm-migration-do-not-try-to-migrate-unmapped-anonymous-pages.patch
mm-share-the-anon_vma-ref-counts-between-ksm-and-page-migration.patch
mm-allow-config_migration-to-be-set-without-config_numa-or-memory-hot-remove.patch
mm-allow-config_migration-to-be-set-without-config_numa-or-memory-hot-remove-fix.patch
mm-export-unusable-free-space-index-via-proc-unusable_index.patch
mm-export-unusable-free-space-index-via-proc-unusable_index-fix.patch
mm-export-unusable-free-space-index-via-proc-unusable_index-fix-fix-2.patch
mm-export-fragmentation-index-via-proc-extfrag_index.patch
mm-export-fragmentation-index-via-proc-extfrag_index-fix.patch
mm-move-definition-for-lru-isolation-modes-to-a-header.patch
mm-compaction-memory-compaction-core.patch
mm-compaction-memory-compaction-core-fix.patch
mm-compaction-add-proc-trigger-for-memory-compaction.patch
mm-compaction-add-proc-trigger-for-memory-compaction-fix.patch
mm-compaction-add-proc-trigger-for-memory-compaction-fix-fix.patch
mm-compaction-add-sys-trigger-for-per-node-memory-compaction.patch
mm-compaction-direct-compact-when-a-high-order-allocation-fails.patch
mm-compaction-direct-compact-when-a-high-order-allocation-fails-reject-fix.patch
mm-compaction-add-a-tunable-that-decides-when-memory-should-be-compacted-and-when-it-should-be-reclaimed.patch
mm-migration-allow-the-migration-of-pageswapcache-pages.patch
mm-migration-allow-the-migration-of-pageswapcache-pages-fix.patch
mm-compaction-do-not-display-compaction-related-stats-when-config_compaction.patch
mm-compaction-do-not-display-compaction-related-stats-when-config_compaction-fix.patch
mm-compaction-do-not-display-compaction-related-stats-when-config_compaction-fix-fix-2.patch
mm-compaction-do-not-display-compaction-related-stats-when-config_co-mpaction-reject-fixpatch-added-to-mm-tree.patch
vmscan-prevent-get_scan_ratio-rounding-errors.patch
vmscan-page_check_references-check-low-order-lumpy-reclaim-properly.patch
proc-cleanup-remove-unused-assignments.patch
reiser4.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html