On Mon, Aug 04, 2014 at 10:55:16AM +0200, Vlastimil Babka wrote:
> isolate_migratepages_range() is the main function of the compaction scanner,
> called either on a single pageblock by isolate_migratepages() during regular
> compaction, or on an arbitrary range by CMA's __alloc_contig_migrate_range().
> It currently performs two pageblock-wide compaction suitability checks, and
> because of the CMA callpath, it tracks if it crossed a pageblock boundary in
> order to repeat those checks.
> 
> However, closer inspection shows that those checks are always true for CMA:
> - isolation_suitable() is true because CMA sets cc->ignore_skip_hint to true
> - migrate_async_suitable() check is skipped because CMA uses sync compaction
> 
> We can therefore move the compaction-specific checks to isolate_migratepages()
> and simplify isolate_migratepages_range(). Furthermore, we can mimic the
> freepage scanner family of functions, which has an isolate_freepages_block()
> function called both by compaction from isolate_freepages() and by CMA from
> isolate_freepages_range(), where each use case adds its own specific glue
> code. This allows further code simplification.
> 
> Thus, we rename isolate_migratepages_range() to isolate_migratepages_block()
> and limit its functionality to a single pageblock (or its subset). For CMA,
> a new, different isolate_migratepages_range() is created as a CMA-specific
> wrapper for the _block() function. The checks specific to compaction are
> moved to isolate_migratepages(). As part of the unification of these two
> families of functions, we remove the redundant zone parameter where
> applicable, since the zone pointer is already passed in cc->zone.
> 
> Furthermore, going back to compact_zone() and compact_finished() when a
> pageblock is found unsuitable (now by isolate_migratepages()) is wasteful -
> the checks are meant to skip pageblocks quickly. The patch therefore also
> introduces a simple loop into isolate_migratepages() so that it does not
> return immediately on failed pageblock checks, but keeps going until
> isolate_migratepages_block() gets called once. Similarly to
> isolate_freepages(), the function periodically checks if it needs to
> reschedule or abort async compaction.
> 
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Acked-by: Mel Gorman <mgorman@xxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Michal Nazarewicz <mina86@xxxxxxxxxx>
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> ---
>  mm/compaction.c | 235 +++++++++++++++++++++++++++++++++-----------------------
>  mm/internal.h   |   4 +-
>  mm/page_alloc.c |   3 +-
>  3 files changed, 141 insertions(+), 101 deletions(-)
> 

Hello,

This patch needs one fix. Please see below.
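To illustrate the problem the fix addresses, here is a small standalone
sketch (illustration only, not kernel code; the page counts are made up)
of what happens when the accounting walk runs once per pageblock over a
list that accumulates across pageblocks:

	/* Illustration only, not kernel code. Builds with any C compiler. */
	#include <stdio.h>

	int main(void)
	{
		int list_len = 0;	/* models length of cc->migratepages */
		long nr_isolated = 0;	/* models NR_ISOLATED_ANON/_FILE sum */
		int block;

		/* Three pageblocks, each isolating 32 pages onto the shared
		 * list; the buggy accounting walks the whole accumulated
		 * list after every block. */
		for (block = 0; block < 3; block++) {
			list_len += 32;
			nr_isolated += list_len;
		}

		/* Prints "counted 192, actually isolated 96". The stale
		 * excess makes too_many_isolated() keep returning true, so
		 * cma_alloc() hangs. Accounting once, after the loop,
		 * counts each page exactly once. */
		printf("counted %ld, actually isolated %d\n",
		       nr_isolated, list_len);
		return 0;
	}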
Thanks.

---------->8-------------
>From 3ba15d35c00e0d913d603d2972678bf74554ed60 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Date: Mon, 29 Sep 2014 15:08:07 +0900
Subject: [PATCH] mm/compaction: fix isolated page counting bug in compaction

acct_isolated() adjusts the isolated page counts: it iterates the
cc->migratepages list and adds to NR_ISOLATED_ANON and NR_ISOLATED_FILE
according to the number of anon and file pages on the list,
respectively.

Before commit "mm, compaction: move pageblock checks up from
isolate_migratepages_range()", it was called just once from
isolate_migratepages_range(). After that commit, it is called from the
newly introduced isolate_migratepages_block(), which can be called many
times from isolate_migratepages_range(), so some pages can be counted
more than once. This duplicate counting bug results in a hang in
cma_alloc(), because too_many_isolated() keeps returning true.

Fix this by moving acct_isolated() up into the callers,
isolate_migratepages_range() and isolate_migratepages(). After this
change, each isolated page is counted exactly once.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
---
 mm/compaction.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 7d9d92e..48129b6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -514,22 +514,19 @@ isolate_freepages_range(struct compact_control *cc,
 }
 
 /* Update the number of anon and file isolated pages in the zone */
-static void acct_isolated(struct zone *zone, bool locked, struct compact_control *cc)
+static void acct_isolated(struct zone *zone, struct compact_control *cc)
 {
 	struct page *page;
 	unsigned int count[2] = { 0, };
 
+	if (list_empty(&cc->migratepages))
+		return;
+
 	list_for_each_entry(page, &cc->migratepages, lru)
 		count[!!page_is_file_cache(page)]++;
 
-	/* If locked we can use the interrupt unsafe versions */
-	if (locked) {
-		__mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
-		__mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
-	} else {
-		mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
-		mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
-	}
+	mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
+	mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
 }
 
 /* Similar to reclaim, but different enough that they don't share logic */
@@ -726,8 +723,6 @@ isolate_success:
 	if (unlikely(low_pfn > end_pfn))
 		low_pfn = end_pfn;
 
-	acct_isolated(zone, locked, cc);
-
 	if (locked)
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
 
@@ -789,6 +784,7 @@ isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 			break;
 		}
 	}
+	acct_isolated(cc->zone, cc);
 
 	return pfn;
 }
@@ -1028,6 +1024,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			break;
 		}
 
+	acct_isolated(zone, cc);
 	/* Record where migration scanner will be restarted */
 	cc->migrate_pfn = low_pfn;
-- 
1.7.9.5