On 1/30/2024 2:30 PM, Andrew Morton wrote:
The patch titled
Subject: mm: compaction: update the cc->nr_migratepages when allocating or freeing the freepages
has been added to the -mm mm-unstable branch. Its filename is
mm-compaction-update-the-cc-nr_migratepages-when-allocating-or-freeing-the-freepages.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-compaction-update-the-cc-nr_migratepages-when-allocating-or-freeing-the-freepages.patch
This patch will later appear in the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Subject: mm: compaction: update the cc->nr_migratepages when allocating or freeing the freepages
Date: Mon, 22 Jan 2024 21:01:54 +0800
Currently we use the 'cc->nr_freepages >= cc->nr_migratepages' comparison in
isolate_freepages() to ensure that enough freepages have been isolated.
However, compaction_alloc() only decreases cc->nr_freepages without updating
cc->nr_migratepages, which wastes CPU cycles and causes too many freepages
to be isolated.
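To illustrate the effect, here is a minimal stand-alone model of the
isolate_freepages() stop condition (not kernel code; the refill() helper and
all numbers are hypothetical): when the free list runs dry partway through a
migration batch, a stale cc->nr_migratepages makes the free scanner isolate
up to the original demand instead of only what is still needed.

#include <stdio.h>

#define BATCH		512	/* base pages isolated for migration (hypothetical) */
#define ALREADY_DONE	384	/* targets already handed out when the free list runs dry */
#define STEP		32	/* free pages gained per isolation step (hypothetical) */

/* Model of the isolate_freepages() stop condition: keep isolating free
 * pages, STEP at a time, until supply covers the believed remaining demand. */
static long refill(long nr_freepages, long nr_migratepages)
{
	long isolated = 0;

	while (nr_freepages < nr_migratepages) {
		nr_freepages += STEP;
		isolated += STEP;
	}
	return isolated;
}

int main(void)
{
	/* Stale accounting: nr_migratepages still reports the whole batch. */
	long stale = refill(0, BATCH);
	/* Updated accounting: the pages that already got their migration
	 * targets were subtracted in compaction_alloc(). */
	long fixed = refill(0, BATCH - ALREADY_DONE);

	printf("isolated with stale nr_migratepages  : %ld free pages\n", stale);
	printf("isolated with updated nr_migratepages: %ld free pages\n", fixed);
	return 0;	/* prints 512 vs. 128 */
}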
So we should also update cc->nr_migratepages when allocating or freeing the
freepages, to avoid isolating excess freepages.  With this change I can see
fewer free pages scanned and isolated when running thpcompact on my Arm64
server:
                                         k6.7     k6.7_patched
Ops Compaction pages isolated   120692036.00     118160797.00
Ops Compaction migrate scanned  131210329.00     154093268.00
Ops Compaction free scanned    1090587971.00    1080632536.00
Ops Compact scan efficiency            12.03            14.26
Moreover, I did not see an obvious latency improvement; this is likely
because isolating freepages is not the bottleneck in the thpcompact test
case.
                             k6.7                 k6.7_patched
Amean fault-both-1      1089.76 (   0.00%)     1080.16 *   0.88%*
Amean fault-both-3      1616.48 (   0.00%)     1636.65 *  -1.25%*
Amean fault-both-5      2266.66 (   0.00%)     2219.20 *   2.09%*
Amean fault-both-7      2909.84 (   0.00%)     2801.90 *   3.71%*
Amean fault-both-12     4861.26 (   0.00%)     4733.25 *   2.63%*
Amean fault-both-18     7351.11 (   0.00%)     6950.51 *   5.45%*
Amean fault-both-24     9059.30 (   0.00%)     9159.99 *  -1.11%*
Amean fault-both-30    10685.68 (   0.00%)    11399.02 *  -6.68%*
Link: https://lkml.kernel.org/r/0773058df022fa701b78f9a6dfe3c501a1a77351.1705928395.git.baolin.wang@xxxxxxxxxxxxxxxxx
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Hi Andrew,
Thanks for queuing this patch, but it should be updated as below now that it
sits on top of the "Enable >0 order folio memory compaction" series:
diff --git a/mm/compaction.c b/mm/compaction.c
index fa9993c8a389..af3738102838 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1881,6 +1881,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	if (order)
 		prep_compound_page(&dst->page, order);
 	cc->nr_freepages -= 1 << order;
+	cc->nr_migratepages -= 1 << order;
 	return page_rmappable_folio(&dst->page);
 }
@@ -1903,6 +1904,7 @@ static void compaction_free(struct folio *dst, unsigned long data)
 	list_add(&dst->lru, &cc->freepages[order].pages);
 	cc->freepages[order].nr_pages++;
 	cc->nr_freepages += 1 << order;
+	cc->nr_migratepages += 1 << order;
 }
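To spell out the unit change with a quick worked example of my own (not part
of the patch): cc->nr_migratepages is accounted in base pages, and after the
folio series a single compaction_alloc()/compaction_free() call now transfers
a whole folio.  With eight order-2 (4-page) source folios the counter starts
at 8 * 4 = 32 base pages; decrementing it by one per call (as in the version
currently queued, quoted below) removes only 8 and leaves 24 pages of phantom
demand for the free scanner, whereas '-= 1 << order' removes all 32.  The
same reasoning applies to '+= 1 << order' in compaction_free().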
mm/compaction.c | 2 ++
1 file changed, 2 insertions(+)
--- a/mm/compaction.c~mm-compaction-update-the-cc-nr_migratepages-when-allocating-or-freeing-the-freepages
+++ a/mm/compaction.c
@@ -1876,6 +1876,7 @@ again:
 	dst = list_first_entry(&cc->freepages[order].pages, struct folio, lru);
 	cc->freepages[order].nr_pages--;
+	cc->nr_migratepages--;
 	list_del(&dst->lru);
 
 done:
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
@@ -1904,6 +1905,7 @@ static void compaction_free(struct folio *dst, unsigned long data)
 	list_add(&dst->lru, &cc->freepages[order].pages);
 	cc->freepages[order].nr_pages++;
 	cc->nr_freepages += 1 << order;
+	cc->nr_migratepages++;
 }
 
 /* possible outcome of isolate_migratepages */
_
Patches currently in -mm which might be from baolin.wang@xxxxxxxxxxxxxxxxx are
mm-compaction-limit-the-suitable-target-page-order-to-be-less-than-cc-order.patch
mm-compaction-update-the-cc-nr_migratepages-when-allocating-or-freeing-the-freepages.patch