+ mm-page_alloc-remove-pcppage-migratetype-caching.patch added to mm-unstable branch

The patch titled
     Subject: mm: page_alloc: remove pcppage migratetype caching
has been added to the -mm mm-unstable branch.  Its filename is
     mm-page_alloc-remove-pcppage-migratetype-caching.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-page_alloc-remove-pcppage-migratetype-caching.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: page_alloc: remove pcppage migratetype caching
Date: Mon, 11 Sep 2023 15:41:42 -0400

Patch series "mm: page_alloc: freelist migratetype hygiene", v2.

This is a breakout series from the huge page allocator patches[1].

While testing and benchmarking the series incrementally, as per reviewer
request, it became apparent that there are several sources of freelist
migratetype violations that later patches in the series hid.

Those violations occur when pages of one migratetype end up on the
freelists of another type.  This encourages incompatible page mixing
down the line, where allocation requests ask for one migratetype but
receive pages of another.  This defeats the mobility grouping.

The series addresses those causes.  The last patch adds type checks on all
freelist movements to rule out any violations.  I used these checks to
identify the violations fixed up in the preceding patches.
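
Conceptually, such a check compares a block's current migratetype
against the freelist a page is being moved to.  A hypothetical,
illustrative form (not the code in the actual patch) could look like:

  /*
   * Illustrative only -- not the check added by the final patch.  The
   * idea: whenever a page is placed on free_list[migratetype], warn if
   * the underlying pageblock has a different type.
   */
  static inline void check_freelist_type(struct page *page, int migratetype)
  {
          VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
                       "pageblock type %d placed on freelist %d\n",
                       get_pageblock_migratetype(page), migratetype);
  }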

The series is a breakout, but it has merit on its own: less type mixing
means improved grouping, which means less work for compaction, which
means a higher THP success rate and lower allocation latencies.  The
results can be seen in a mixed workload that stresses the machine with a
kernel build job while periodically attempting to allocate batches of
THP.  The data is averaged over 50 consecutive defconfig builds:

                                                        VANILLA      PATCHED-CLEANLISTS
Hugealloc Time median                     14642.00 (    +0.00%)   10506.00 (   -28.25%)
Hugealloc Time min                         4820.00 (    +0.00%)    4783.00 (    -0.77%)
Hugealloc Time max                      6786868.00 (    +0.00%) 6556624.00 (    -3.39%)
Kbuild Real time                            240.03 (    +0.00%)     241.45 (    +0.59%)
Kbuild User time                           1195.49 (    +0.00%)    1195.69 (    +0.02%)
Kbuild System time                           96.44 (    +0.00%)      97.03 (    +0.61%)
THP fault alloc                           11490.00 (    +0.00%)   11802.30 (    +2.72%)
THP fault fallback                          782.62 (    +0.00%)     478.88 (   -38.76%)
THP fault fail rate %                         6.38 (    +0.00%)       3.90 (   -33.52%)
Direct compact stall                        297.70 (    +0.00%)     224.56 (   -24.49%)
Direct compact fail                         265.98 (    +0.00%)     191.56 (   -27.87%)
Direct compact success                       31.72 (    +0.00%)      33.00 (    +3.91%)
Direct compact success rate %                13.11 (    +0.00%)      17.26 (   +29.43%)
Compact daemon scanned migrate          1673661.58 (    +0.00%) 1591682.18 (    -4.90%)
Compact daemon scanned free             2711252.80 (    +0.00%) 2615217.78 (    -3.54%)
Compact direct scanned migrate           384998.62 (    +0.00%)  261689.42 (   -32.03%)
Compact direct scanned free              966308.94 (    +0.00%)  667459.76 (   -30.93%)
Compact migrate scanned daemon %             80.86 (    +0.00%)      83.34 (    +3.02%)
Compact free scanned daemon %                74.41 (    +0.00%)      78.26 (    +5.10%)
Alloc stall                                 338.06 (    +0.00%)     440.72 (   +30.28%)
Pages kswapd scanned                    1356339.42 (    +0.00%) 1402313.42 (    +3.39%)
Pages kswapd reclaimed                   581309.08 (    +0.00%)  587956.82 (    +1.14%)
Pages direct scanned                      56384.18 (    +0.00%)  141095.04 (  +150.24%)
Pages direct reclaimed                    17055.54 (    +0.00%)   22427.96 (   +31.50%)
Pages scanned kswapd %                       96.38 (    +0.00%)      93.60 (    -2.86%)
Swap out                                  41528.00 (    +0.00%)   47969.92 (   +15.51%)
Swap in                                    6541.42 (    +0.00%)    9093.30 (   +39.01%)
File refaults                            127666.50 (    +0.00%)  135766.84 (    +6.34%)


This patch (of 6):

The idea behind the cache is to save get_pageblock_migratetype() lookups
during bulk freeing.  A microbenchmark suggests this isn't helping,
though.  The pcp migratetype can get stale, which means that bulk freeing
has an extra branch to check if the pageblock was isolated while on the
pcp.
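
To spell out the staleness scenario the cache had to guard against, here
is a simplified, illustrative sequence (not actual kernel source; the
helpers are the ones visible in the diff below):

  /* CPU A: page freed to a pcplist; the block's type is cached in the page */
  mt = get_pfnblock_migratetype(page, pfn);     /* e.g. MIGRATE_MOVABLE */
  set_pcppage_migratetype(page, mt);            /* stashed in page->index */

  /* CPU B: the pageblock is isolated while the page sits on the pcplist */
  set_pageblock_migratetype(page, MIGRATE_ISOLATE);

  /* CPU A: bulk freeing later drains the pcplist; the cache is now stale */
  mt = get_pcppage_migratetype(page);           /* still says MIGRATE_MOVABLE */
  if (unlikely(has_isolate_pageblock(zone)))    /* hence this extra branch */
          mt = get_pageblock_migratetype(page); /* re-read the real type */
  __free_one_page(page, pfn, zone, order, mt, FPI_NONE);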

While the variance overlaps, the cache write and the branch seem to make
this a net negative.  The following test allocates and frees batches of
10,000 pages (~3x the pcp high marks to trigger flushing):

Before:
          8,668.48 msec task-clock                       #   99.735 CPUs utilized               ( +-  2.90% )
                19      context-switches                 #    4.341 /sec                        ( +-  3.24% )
                 0      cpu-migrations                   #    0.000 /sec
            17,440      page-faults                      #    3.984 K/sec                       ( +-  2.90% )
    41,758,692,473      cycles                           #    9.541 GHz                         ( +-  2.90% )
   126,201,294,231      instructions                     #    5.98  insn per cycle              ( +-  2.90% )
    25,348,098,335      branches                         #    5.791 G/sec                       ( +-  2.90% )
        33,436,921      branch-misses                    #    0.26% of all branches             ( +-  2.90% )

         0.0869148 +- 0.0000302 seconds time elapsed  ( +-  0.03% )

After:
          8,444.81 msec task-clock                       #   99.726 CPUs utilized               ( +-  2.90% )
                22      context-switches                 #    5.160 /sec                        ( +-  3.23% )
                 0      cpu-migrations                   #    0.000 /sec
            17,443      page-faults                      #    4.091 K/sec                       ( +-  2.90% )
    40,616,738,355      cycles                           #    9.527 GHz                         ( +-  2.90% )
   126,383,351,792      instructions                     #    6.16  insn per cycle              ( +-  2.90% )
    25,224,985,153      branches                         #    5.917 G/sec                       ( +-  2.90% )
        32,236,793      branch-misses                    #    0.25% of all branches             ( +-  2.90% )

         0.0846799 +- 0.0000412 seconds time elapsed  ( +-  0.05% )
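
The benchmark loop itself is not part of this changelog; purely as an
illustration, the batched alloc/free pattern described above might look
roughly like this in a throwaway in-kernel test (hypothetical sketch,
batch size chosen to exceed the pcp high marks):

  /*
   * Hypothetical sketch of the measured pattern -- not the actual
   * benchmark.  Freeing a batch well above pcp->high forces the
   * pcplists to be drained via free_pcppages_bulk().
   */
  #define BATCH 10000
  static struct page *pages[BATCH];

  static void alloc_free_batch(void)
  {
          int i;

          for (i = 0; i < BATCH; i++)
                  pages[i] = alloc_page(GFP_KERNEL);
          for (i = 0; i < BATCH; i++)
                  if (pages[i])
                          __free_page(pages[i]);
  }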

A side effect is that this also ensures that pages whose pageblock gets
stolen while on the pcplist end up on the right freelist, and we don't
perform potentially type-incompatible buddy merges (or skip merges when
we shouldn't), which is likely beneficial to long-term fragmentation
management, although the effects would be harder to measure.  Settle for
simpler and faster code as justification here.

Link: https://lkml.kernel.org/r/20230911195023.247694-1-hannes@xxxxxxxxxxx
Link: https://lkml.kernel.org/r/20230911195023.247694-2-hannes@xxxxxxxxxxx
Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   61 ++++++++++------------------------------------
 1 file changed, 14 insertions(+), 47 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-remove-pcppage-migratetype-caching
+++ a/mm/page_alloc.c
@@ -204,24 +204,6 @@ EXPORT_SYMBOL(node_states);
 
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
-/*
- * A cached value of the page's pageblock's migratetype, used when the page is
- * put on a pcplist. Used to avoid the pageblock migratetype lookup when
- * freeing from pcplists in most cases, at the cost of possibly becoming stale.
- * Also the migratetype set in the page does not necessarily match the pcplist
- * index, e.g. page might have MIGRATE_CMA set but be on a pcplist with any
- * other index - this ensures that it will be put on the correct CMA freelist.
- */
-static inline int get_pcppage_migratetype(struct page *page)
-{
-	return page->index;
-}
-
-static inline void set_pcppage_migratetype(struct page *page, int migratetype)
-{
-	page->index = migratetype;
-}
-
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 unsigned int pageblock_order __read_mostly;
 #endif
@@ -1180,7 +1162,6 @@ static void free_pcppages_bulk(struct zo
 {
 	unsigned long flags;
 	unsigned int order;
-	bool isolated_pageblocks;
 	struct page *page;
 
 	/*
@@ -1193,7 +1174,6 @@ static void free_pcppages_bulk(struct zo
 	pindex = pindex - 1;
 
 	spin_lock_irqsave(&zone->lock, flags);
-	isolated_pageblocks = has_isolate_pageblock(zone);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1209,10 +1189,12 @@ static void free_pcppages_bulk(struct zo
 		order = pindex_to_order(pindex);
 		nr_pages = 1 << order;
 		do {
+			unsigned long pfn;
 			int mt;
 
 			page = list_last_entry(list, struct page, pcp_list);
-			mt = get_pcppage_migratetype(page);
+			pfn = page_to_pfn(page);
+			mt = get_pfnblock_migratetype(page, pfn);
 
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
@@ -1221,11 +1203,8 @@ static void free_pcppages_bulk(struct zo
 
 			/* MIGRATE_ISOLATE page should not go to pcplists */
 			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-			/* Pageblock could have been isolated meanwhile */
-			if (unlikely(isolated_pageblocks))
-				mt = get_pageblock_migratetype(page);
 
-			__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
@@ -1571,7 +1550,6 @@ struct page *__rmqueue_smallest(struct z
 			continue;
 		del_page_from_free_list(page, zone, current_order);
 		expand(zone, page, order, current_order, migratetype);
-		set_pcppage_migratetype(page, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
 				pcp_allowed_order(order) &&
 				migratetype < MIGRATE_PCPTYPES);
@@ -2175,7 +2153,7 @@ static int rmqueue_bulk(struct zone *zon
 		 * pages are ordered properly.
 		 */
 		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pcppage_migratetype(page)))
+		if (is_migrate_cma(get_pageblock_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
 	}
@@ -2334,19 +2312,6 @@ void drain_all_pages(struct zone *zone)
 	__drain_all_pages(zone, false);
 }
 
-static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
-							unsigned int order)
-{
-	int migratetype;
-
-	if (!free_pages_prepare(page, order, FPI_NONE))
-		return false;
-
-	migratetype = get_pfnblock_migratetype(page, pfn);
-	set_pcppage_migratetype(page, migratetype);
-	return true;
-}
-
 static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
 {
 	int min_nr_free, max_nr_free;
@@ -2432,7 +2397,7 @@ void free_unref_page(struct page *page,
 	unsigned long pfn = page_to_pfn(page);
 	int migratetype, pcpmigratetype;
 
-	if (!free_unref_page_prepare(page, pfn, order))
+	if (!free_pages_prepare(page, order, FPI_NONE))
 		return;
 
 	/*
@@ -2442,7 +2407,7 @@ void free_unref_page(struct page *page,
 	 * get those areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = pcpmigratetype = get_pcppage_migratetype(page);
+	migratetype = pcpmigratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
@@ -2484,7 +2449,8 @@ void free_unref_page_list(struct list_he
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
 		unsigned long pfn = page_to_pfn(page);
-		if (!free_unref_page_prepare(page, pfn, 0)) {
+
+		if (!free_pages_prepare(page, 0, FPI_NONE)) {
 			list_del(&page->lru);
 			continue;
 		}
@@ -2493,7 +2459,7 @@ void free_unref_page_list(struct list_he
 		 * Free isolated pages directly to the allocator, see
 		 * comment in free_unref_page.
 		 */
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			list_del(&page->lru);
 			free_one_page(page_zone(page), page, pfn, 0, migratetype, FPI_NONE);
@@ -2502,10 +2468,11 @@ void free_unref_page_list(struct list_he
 	}
 
 	list_for_each_entry_safe(page, next, list, lru) {
+		unsigned long pfn = page_to_pfn(page);
 		struct zone *zone = page_zone(page);
 
 		list_del(&page->lru);
-		migratetype = get_pcppage_migratetype(page);
+		migratetype = get_pfnblock_migratetype(page, pfn);
 
 		/*
 		 * Either different zone requiring a different pcp lock or
@@ -2528,7 +2495,7 @@ void free_unref_page_list(struct list_he
 			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
-				free_one_page(zone, page, page_to_pfn(page),
+				free_one_page(zone, page, pfn,
 					      0, migratetype, FPI_NONE);
 				locked_zone = NULL;
 				continue;
@@ -2697,7 +2664,7 @@ struct page *rmqueue_buddy(struct zone *
 			}
 		}
 		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pcppage_migratetype(page));
+					  get_pageblock_migratetype(page));
 		spin_unlock_irqrestore(&zone->lock, flags);
 	} while (check_new_pages(page, order));
 
_

Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-page_alloc-fix-cma-and-highatomic-landing-on-the-wrong-buddy-list.patch
mm-page_alloc-remove-pcppage-migratetype-caching.patch
mm-page_alloc-fix-up-block-types-when-merging-compatible-blocks.patch
mm-page_alloc-move-free-pages-when-converting-block-during-isolation.patch
mm-page_alloc-fix-move_freepages_block-range-error.patch
mm-page_alloc-fix-freelist-movement-during-block-conversion.patch
mm-page_alloc-consolidate-free-page-accounting.patch



