+ mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch added to -mm tree

The patch titled
     Subject: mm, compaction: reduce unnecessary skipping of migration target scanner
has been added to the -mm tree.  Its filename is
     mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, compaction: reduce unnecessary skipping of migration target scanner

The fast isolation of pages can move the scanner faster than is necessary,
depending on the contents of the free list.  This patch only allows the
fast isolation to initialise the scanner and to advance it slowly.  The
primary means of moving the scanner forward remains the linear scan, which
reduces the likelihood that the migration source/target scanners meet
prematurely and trigger a rescan.

                                        4.20.0                 4.20.0
                               noresched-v2r15         slowfree-v2r15
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      2736.50 (   0.00%)     2512.53 (   8.18%)
Amean     fault-both-5      4133.70 (   0.00%)     4159.43 (  -0.62%)
Amean     fault-both-7      5738.61 (   0.00%)     5950.15 (  -3.69%)
Amean     fault-both-12     9392.82 (   0.00%)     8674.38 (   7.65%)
Amean     fault-both-18    13257.15 (   0.00%)    12850.79 (   3.07%)
Amean     fault-both-24    16859.44 (   0.00%)    17242.86 (  -2.27%)
Amean     fault-both-30    16249.30 (   0.00%)    19404.18 * -19.42%*
Amean     fault-both-32    14904.71 (   0.00%)    16200.79 (  -8.70%)

The impact on latency, success rates and scan rates is marginal, but
avoiding unnecessary restarts is important.  It helps later patches that
are more careful about how pageblocks are treated, as earlier iterations
of those patches hit corner cases where the restarts were punishing and
very visible.

Link: http://lkml.kernel.org/r/20190104125011.16071-21-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


--- a/mm/compaction.c~mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner
+++ a/mm/compaction.c
@@ -324,10 +324,9 @@ static void update_cached_migrate(struct
  * future. The information is later cleared by __reset_isolation_suitable().
  */
 static void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated)
+			struct page *page, unsigned long pfn)
 {
 	struct zone *zone = cc->zone;
-	unsigned long pfn;
 
 	if (cc->no_set_skip_hint)
 		return;
@@ -335,13 +334,8 @@ static void update_pageblock_skip(struct
 	if (!page)
 		return;
 
-	if (nr_isolated)
-		return;
-
 	set_pageblock_skip(page);
 
-	pfn = page_to_pfn(page);
-
 	/* Update where async and sync compaction should restart */
 	if (pfn < zone->compact_cached_free_pfn)
 		zone->compact_cached_free_pfn = pfn;
@@ -359,7 +353,7 @@ static inline bool pageblock_skip_persis
 }
 
 static inline void update_pageblock_skip(struct compact_control *cc,
-			struct page *page, unsigned long nr_isolated)
+			struct page *page, unsigned long pfn)
 {
 }
 
@@ -450,7 +444,7 @@ static unsigned long isolate_freepages_b
 				bool strict)
 {
 	int nr_scanned = 0, total_isolated = 0;
-	struct page *cursor, *valid_page = NULL;
+	struct page *cursor;
 	unsigned long flags = 0;
 	bool locked = false;
 	unsigned long blockpfn = *start_pfn;
@@ -477,9 +471,6 @@ static unsigned long isolate_freepages_b
 		if (!pfn_valid_within(blockpfn))
 			goto isolate_fail;
 
-		if (!valid_page)
-			valid_page = page;
-
 		/*
 		 * For compound pages such as THP and hugetlbfs, we can save
 		 * potentially a lot of iterations if we skip them at once.
@@ -576,10 +567,6 @@ isolate_fail:
 	if (strict && blockpfn < end_pfn)
 		total_isolated = 0;
 
-	/* Update the pageblock-skip if the whole pageblock was scanned */
-	if (blockpfn == end_pfn)
-		update_pageblock_skip(cc, valid_page, total_isolated);
-
 	cc->total_free_scanned += nr_scanned;
 	if (total_isolated)
 		count_compact_events(COMPACTISOLATED, total_isolated);
@@ -1295,8 +1282,10 @@ fast_isolate_freepages(struct compact_co
 		}
 	}
 
-	if (highest && highest > cc->zone->compact_cached_free_pfn)
+	if (highest && highest >= cc->zone->compact_cached_free_pfn) {
+		highest -= pageblock_nr_pages;
 		cc->zone->compact_cached_free_pfn = highest;
+	}
 
 	cc->total_free_scanned += nr_scanned;
 	if (!page)
@@ -1376,6 +1365,10 @@ static void isolate_freepages(struct com
 		isolate_freepages_block(cc, &isolate_start_pfn, block_end_pfn,
 					freelist, false);
 
+		/* Update the skip hint if the full pageblock was scanned */
+		if (isolate_start_pfn == block_end_pfn)
+			update_pageblock_skip(cc, page, block_start_pfn);
+
 		/* Are enough freepages isolated? */
 		if (cc->nr_freepages >= cc->nr_migratepages) {
 			if (isolate_start_pfn >= block_end_pfn) {
_
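
For readers who want to poke at the logic outside the kernel, below is a
minimal standalone C model of the cached-free-pfn change in
fast_isolate_freepages() in the diff above: the fast search now steps the
cached pfn back one pageblock, so the linear scanner, rather than the fast
search, is what advances the cache past a block.  This is a sketch, not
kernel code; PAGEBLOCK_NR_PAGES and the sample pfn values are illustrative
assumptions.

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL  /* illustrative; arch-dependent in the kernel */

/* Old behaviour: cache the highest candidate pfn directly, letting the
 * fast search drag the free scanner forward on its own. */
static unsigned long cached_free_pfn_old(unsigned long cached,
					 unsigned long highest)
{
	if (highest && highest > cached)
		cached = highest;
	return cached;
}

/* New behaviour: step the cache back one pageblock so the block holding
 * the highest candidate is still covered by the linear scanner. */
static unsigned long cached_free_pfn_new(unsigned long cached,
					 unsigned long highest)
{
	if (highest && highest >= cached)
		cached = highest - PAGEBLOCK_NR_PAGES;
	return cached;
}

int main(void)
{
	unsigned long cached = 4096, highest = 8192;

	printf("old: cached free pfn -> %lu\n",
	       cached_free_pfn_old(cached, highest));
	printf("new: cached free pfn -> %lu (one pageblock behind)\n",
	       cached_free_pfn_new(cached, highest));
	return 0;
}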

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-page_alloc-do-not-wake-kswapd-with-zone-lock-held.patch
mm-compaction-shrink-compact_control.patch
mm-compaction-rearrange-compact_control.patch
mm-compaction-remove-last_migrated_pfn-from-compact_control.patch
mm-compaction-remove-unnecessary-zone-parameter-in-some-instances.patch
mm-compaction-rename-map_pages-to-split_map_pages.patch
mm-compaction-skip-pageblocks-with-reserved-pages.patch
mm-migrate-immediately-fail-migration-of-a-page-with-no-migration-handler.patch
mm-compaction-always-finish-scanning-of-a-full-pageblock.patch
mm-compaction-use-the-page-allocator-bulk-free-helper-for-lists-of-pages.patch
mm-compaction-ignore-the-fragmentation-avoidance-boost-for-isolation-and-compaction.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-source.patch
mm-compaction-keep-migration-source-private-to-a-single-compaction-instance.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-target.patch
mm-compaction-avoid-rescanning-the-same-pageblock-multiple-times.patch
mm-compaction-finish-pageblock-scanning-on-contention.patch
mm-compaction-check-early-for-huge-pages-encountered-by-the-migration-scanner.patch
mm-compaction-keep-cached-migration-pfns-synced-for-unusable-pageblocks.patch
mm-compaction-rework-compact_should_abort-as-compact_check_resched.patch
mm-compaction-do-not-consider-a-need-to-reschedule-as-contention.patch
mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch
mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch
mm-compaction-sample-pageblocks-for-free-pages.patch
mm-compaction-be-selective-about-what-pageblocks-to-clear-skip-hints.patch
mm-compaction-capture-a-page-under-direct-compaction.patch
mm-compaction-do-not-direct-compact-remote-memory.patch



