+ mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch added to -mm tree

The patch titled
     Subject: mm, compaction: round-robin the order while searching the free lists for a target
has been added to the -mm tree.  Its filename is
     mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, compaction: round-robin the order while searching the free lists for a target

As compaction proceeds and creates high-order blocks, the free list search
gets less efficient as the larger blocks are used as compaction targets.
Eventually, the larger blocks end up behind the migration scanner in
partially migrated pageblocks and the search fails.  This patch
round-robins the orders that are searched so that larger blocks can be
skipped and smaller blocks found that can be used as migration targets.
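For illustration, the snippet below is a minimal standalone sketch of the
round-robin wrap-around described above.  The body of next_search_order()
mirrors the helper introduced by the patch further down; the trimmed
struct compact_control and the main() harness are simplified stand-ins
added only for this example and are not part of the patch.

	#include <stdio.h>

	/* Simplified stand-in for the relevant compact_control fields. */
	struct compact_control {
		int order;		/* order the compactor needs */
		int search_order;	/* order to start the fast search at */
	};

	/*
	 * Return the next order to search, wrapping from 0 back to
	 * order - 1.  Once the search has come full circle back to the
	 * starting order, nudge the starting point for the next attempt
	 * and return -1 to terminate the search.
	 */
	static int next_search_order(struct compact_control *cc, int order)
	{
		order--;
		if (order < 0)
			order = cc->order - 1;

		if (order == cc->search_order) {
			cc->search_order--;
			if (cc->search_order < 0)
				cc->search_order = cc->order - 1;
			return -1;
		}

		return order;
	}

	int main(void)
	{
		/* e.g. an order-9 request whose last search started at order 4 */
		struct compact_control cc = { .order = 9, .search_order = 4 };
		int order;

		/* Visits orders 4, 3, 2, 1, 0, 8, 7, 6, 5 and then stops. */
		for (order = cc.search_order; order >= 0;
		     order = next_search_order(&cc, order))
			printf("searching order %d\n", order);

		return 0;
	}

Starting from the cached search_order means the scan first covers the
smaller orders before wrapping around to the larger ones, so large blocks
are only consumed when nothing smaller is available.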

The overall impact was small on a 1-socket machine, but the patch avoids
corner cases where the migration/free scanners meet prematurely, as well
as situations where many of the pageblocks encountered by the free
scanner are almost full instead of being properly packed.  Previous
testing had indicated that, without this patch, there were occasional
large spikes in free scanner activity.  By coincidence, the 2-socket
results showed a 54% reduction in the free scanner, but this will not be
universally true.

Link: http://lkml.kernel.org/r/20190104125011.16071-22-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---


--- a/mm/compaction.c~mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target
+++ a/mm/compaction.c
@@ -1154,6 +1154,24 @@ fast_isolate_around(struct compact_contr
 		set_pageblock_skip(page);
 }
 
+/* Search orders in round-robin fashion */
+static int next_search_order(struct compact_control *cc, int order)
+{
+	order--;
+	if (order < 0)
+		order = cc->order - 1;
+
+	/* Search wrapped around? */
+	if (order == cc->search_order) {
+		cc->search_order--;
+		if (cc->search_order < 0)
+			cc->search_order = cc->order - 1;
+		return -1;
+	}
+
+	return order;
+}
+
 static unsigned long
 fast_isolate_freepages(struct compact_control *cc)
 {
@@ -1186,9 +1204,15 @@ fast_isolate_freepages(struct compact_co
 	if (WARN_ON_ONCE(min_pfn > low_pfn))
 		low_pfn = min_pfn;
 
-	for (order = cc->order - 1;
-	     order >= 0 && !page;
-	     order--) {
+	/*
+	 * Search starts from the last successful isolation order or the next
+	 * order to search after a previous failure
+	 */
+	cc->search_order = min_t(unsigned int, cc->order - 1, cc->search_order);
+
+	for (order = cc->search_order;
+	     !page && order >= 0;
+	     order = next_search_order(cc, order)) {
 		struct free_area *area = &cc->zone->free_area[order];
 		struct list_head *freelist;
 		struct page *freepage;
@@ -1211,6 +1235,7 @@ fast_isolate_freepages(struct compact_co
 
 			if (pfn >= low_pfn) {
 				cc->fast_search_fail = 0;
+				cc->search_order = order;
 				page = freepage;
 				break;
 			}
@@ -2146,6 +2171,7 @@ static enum compact_result compact_zone_
 		.total_migrate_scanned = 0,
 		.total_free_scanned = 0,
 		.order = order,
+		.search_order = order,
 		.gfp_mask = gfp_mask,
 		.zone = zone,
 		.mode = (prio == COMPACT_PRIO_ASYNC) ?
@@ -2377,6 +2403,7 @@ static void kcompactd_do_work(pg_data_t
 	struct zone *zone;
 	struct compact_control cc = {
 		.order = pgdat->kcompactd_max_order,
+		.search_order = pgdat->kcompactd_max_order,
 		.total_migrate_scanned = 0,
 		.total_free_scanned = 0,
 		.classzone_idx = pgdat->kcompactd_classzone_idx,
--- a/mm/internal.h~mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target
+++ a/mm/internal.h
@@ -192,7 +192,8 @@ struct compact_control {
 	struct zone *zone;
 	unsigned long total_migrate_scanned;
 	unsigned long total_free_scanned;
-	unsigned int fast_search_fail;	/* failures to use free list searches */
+	unsigned short fast_search_fail;/* failures to use free list searches */
+	unsigned short search_order;	/* order to start a fast search at */
 	const gfp_t gfp_mask;		/* gfp mask of a direct compactor */
 	int order;			/* order a direct compactor needs */
 	int migratetype;		/* migratetype of direct compactor */
_

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-page_alloc-do-not-wake-kswapd-with-zone-lock-held.patch
mm-compaction-shrink-compact_control.patch
mm-compaction-rearrange-compact_control.patch
mm-compaction-remove-last_migrated_pfn-from-compact_control.patch
mm-compaction-remove-unnecessary-zone-parameter-in-some-instances.patch
mm-compaction-rename-map_pages-to-split_map_pages.patch
mm-compaction-skip-pageblocks-with-reserved-pages.patch
mm-migrate-immediately-fail-migration-of-a-page-with-no-migration-handler.patch
mm-compaction-always-finish-scanning-of-a-full-pageblock.patch
mm-compaction-use-the-page-allocator-bulk-free-helper-for-lists-of-pages.patch
mm-compaction-ignore-the-fragmentation-avoidance-boost-for-isolation-and-compaction.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-source.patch
mm-compaction-keep-migration-source-private-to-a-single-compaction-instance.patch
mm-compaction-use-free-lists-to-quickly-locate-a-migration-target.patch
mm-compaction-avoid-rescanning-the-same-pageblock-multiple-times.patch
mm-compaction-finish-pageblock-scanning-on-contention.patch
mm-compaction-check-early-for-huge-pages-encountered-by-the-migration-scanner.patch
mm-compaction-keep-cached-migration-pfns-synced-for-unusable-pageblocks.patch
mm-compaction-rework-compact_should_abort-as-compact_check_resched.patch
mm-compaction-do-not-consider-a-need-to-reschedule-as-contention.patch
mm-compaction-reduce-unnecessary-skipping-of-migration-target-scanner.patch
mm-compaction-round-robin-the-order-while-searching-the-free-lists-for-a-target.patch
mm-compaction-sample-pageblocks-for-free-pages.patch
mm-compaction-be-selective-about-what-pageblocks-to-clear-skip-hints.patch
mm-compaction-capture-a-page-under-direct-compaction.patch
mm-compaction-do-not-direct-compact-remote-memory.patch



