This is a note to let you know that I've just added the patch titled

    mm/compaction: avoid rescanning pageblocks in isolate_freepages

to the 3.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    mm-compaction-avoid-rescanning-pageblocks-in-isolate_freepages.patch
and it can be found in the queue-3.14 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From e9ade569910a82614ff5f2c2cea2b65a8d785da4 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@xxxxxxx>
Date: Wed, 4 Jun 2014 16:08:34 -0700
Subject: mm/compaction: avoid rescanning pageblocks in isolate_freepages

From: Vlastimil Babka <vbabka@xxxxxxx>

commit e9ade569910a82614ff5f2c2cea2b65a8d785da4 upstream.

The compaction free scanner in isolate_freepages() currently remembers
the PFN of the highest pageblock where it successfully isolates, to be
used as the starting pageblock for the next invocation.  The rationale
behind this is that page migration might return free pages to the
allocator when migration fails, and we don't want to skip them if the
compaction continues.

Since migration now returns free pages back to the compaction code where
they can be reused, this is no longer a concern.  This patch changes
isolate_freepages() so that the restart PFN is updated for each pageblock
where isolation is attempted.  Using stress-highalloc from mmtests, this
resulted in a 10% reduction of the pages scanned by the free scanner.

Note that the somewhat similar functionality that records the highest
successful pageblock in zone->compact_cached_free_pfn remains unchanged.
That cache is used when the whole compaction is restarted, not for
multiple invocations of the free scanner during a single compaction.

Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@xxxxxxxxxxx>
Acked-by: Michal Nazarewicz <mina86@xxxxxxxxxx>
Reviewed-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
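For readers tracing the logic before reading the diff below, the restart
bookkeeping this patch introduces can be modelled in a few lines of
ordinary user-space C.  The sketch is illustrative only: scan_free_pages(),
suitable_pageblock(), isolate_block() and the cut-down struct
compact_control are made-up stand-ins, not the kernel's definitions, and
isolation is faked with a constant.  What it shows is the new behaviour,
where cc->free_pfn is recorded for every pageblock where isolation is
attempted, so the next invocation resumes exactly there instead of
rescanning blocks the scanner has already visited.

#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* assumption: 2MB blocks / 4KB pages */

struct compact_control {		/* cut-down, illustrative stand-in */
	unsigned long free_pfn;		/* where the free scanner restarts */
	unsigned long migrate_pfn;	/* where the migrate scanner is */
	bool finished_update_free;
};

/* Fake suitability check: pretend every third pageblock is usable. */
static bool suitable_pageblock(unsigned long block_start_pfn)
{
	return (block_start_pfn / PAGEBLOCK_NR_PAGES) % 3 == 0;
}

/* Fake isolation: pretend a suitable block always yields 8 free pages. */
static unsigned long isolate_block(unsigned long block_start_pfn)
{
	return suitable_pageblock(block_start_pfn) ? 8 : 0;
}

static void scan_free_pages(struct compact_control *cc,
			    unsigned long low_pfn, unsigned long want)
{
	unsigned long block_start_pfn;
	unsigned long nr_freepages = 0;

	/* Scan pageblocks downwards from free_pfn towards the migrate scanner. */
	for (block_start_pfn = cc->free_pfn & ~(PAGEBLOCK_NR_PAGES - 1);
	     block_start_pfn >= low_pfn && nr_freepages < want;
	     block_start_pfn -= PAGEBLOCK_NR_PAGES) {
		if (!suitable_pageblock(block_start_pfn))
			continue;

		/*
		 * The change this patch makes: record the restart PFN for
		 * every pageblock where isolation is attempted, not only
		 * the first one that yielded pages.  The next invocation
		 * resumes here instead of rescanning visited blocks.
		 */
		cc->free_pfn = block_start_pfn;
		nr_freepages += isolate_block(block_start_pfn);
		if (nr_freepages)
			cc->finished_update_free = true;
	}

	/* Scanners met without isolating enough: let the caller finish. */
	if (block_start_pfn < low_pfn)
		cc->free_pfn = cc->migrate_pfn;
}

int main(void)
{
	struct compact_control cc = {
		.free_pfn = 100 * PAGEBLOCK_NR_PAGES,
		.migrate_pfn = 10 * PAGEBLOCK_NR_PAGES,
	};

	scan_free_pages(&cc, cc.migrate_pfn, 16);
	printf("restart pfn %lu, isolated anything: %d\n",
	       cc.free_pfn, cc.finished_update_free);
	return 0;
}

The real isolate_freepages() additionally handles pageblock skip hints,
lock contention and zone boundaries; the sketch keeps only the restart
logic that this patch changes.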
---
 mm/compaction.c |   22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -688,7 +688,6 @@ static void isolate_freepages(struct zon
 	unsigned long block_start_pfn;	/* start of current pageblock */
 	unsigned long block_end_pfn;	/* end of current pageblock */
 	unsigned long low_pfn;	     /* lowest pfn scanner is able to scan */
-	unsigned long next_free_pfn; /* start pfn for scaning at next round */
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
@@ -709,12 +708,6 @@ static void isolate_freepages(struct zon
 	low_pfn = ALIGN(cc->migrate_pfn + 1, pageblock_nr_pages);
 
 	/*
-	 * If no pages are isolated, the block_start_pfn < low_pfn check
-	 * will kick in.
-	 */
-	next_free_pfn = 0;
-
-	/*
 	 * Isolate free pages until enough are available to migrate the
 	 * pages on cc->migratepages. We stop searching if the migrate
 	 * and free page scanners meet or enough free pages are isolated.
@@ -754,19 +747,19 @@ static void isolate_freepages(struct zon
 			continue;
 
 		/* Found a block suitable for isolating free pages from */
+		cc->free_pfn = block_start_pfn;
 		isolated = isolate_freepages_block(cc, block_start_pfn,
 					block_end_pfn, freelist, false);
 		nr_freepages += isolated;
 
 		/*
-		 * Record the highest PFN we isolated pages from. When next
-		 * looking for free pages, the search will restart here as
-		 * page migration may have returned some pages to the allocator
+		 * Set a flag that we successfully isolated in this pageblock.
+		 * In the next loop iteration, zone->compact_cached_free_pfn
+		 * will not be updated and thus it will effectively contain the
+		 * highest pageblock we isolated pages from.
 		 */
-		if (isolated && next_free_pfn == 0) {
+		if (isolated)
 			cc->finished_update_free = true;
-			next_free_pfn = block_start_pfn;
-		}
 	}
 
 	/* split_free_page does not map the pages */
@@ -777,9 +770,8 @@ static void isolate_freepages(struct zon
 	 * so that compact_finished() may detect this
 	 */
 	if (block_start_pfn < low_pfn)
-		next_free_pfn = cc->migrate_pfn;
+		cc->free_pfn = cc->migrate_pfn;
 
-	cc->free_pfn = next_free_pfn;
 	cc->nr_freepages = nr_freepages;
 }


Patches currently in stable-queue which might be from vbabka@xxxxxxx are

queue-3.14/mm-filemap-move-radix-tree-hole-searching-here.patch
queue-3.14/mm-compaction-terminate-async-compaction-when-rescheduling.patch
queue-3.14/lib-radix-tree-add-radix_tree_delete_item.patch
queue-3.14/mm-compaction-properly-signal-and-act-upon-lock-and-need_sched-contention.patch
queue-3.14/mm-compaction-avoid-rescanning-pageblocks-in-isolate_freepages.patch
queue-3.14/mm-migration-add-destination-page-freeing-callback.patch
queue-3.14/mm-fs-prepare-for-non-page-entries-in-page-cache-radix-trees.patch
queue-3.14/mm-compaction-add-per-zone-migration-pfn-cache-for-async-compaction.patch
queue-3.14/mm-compaction-embed-migration-mode-in-compact_control.patch
queue-3.14/mm-shmem-save-one-radix-tree-lookup-when-truncating-swapped-pages.patch
queue-3.14/mm-page_alloc-prevent-migrate_reserve-pages-from-being-misplaced.patch
queue-3.14/mm-compaction-cleanup-isolate_freepages.patch
queue-3.14/mm-compaction-return-failed-migration-target-pages-back-to-freelist.patch
queue-3.14/mm-compaction-do-not-count-migratepages-when-unnecessary.patch
queue-3.14/mm-compaction-clean-up-unused-code-lines.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html