Re: [linux-next:master] [mm] 5df397dec7: will-it-scale.per_thread_ops -53.3% regression

On Tue, Dec 6, 2022 at 9:41 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> I have tested the patch, it does fix the regression

Thanks.

Andrew, here's the patch with a proper commit message. Note that my
commit message contains the SHA1 of the original patch both in the
explanation and in a "Fixes:" line, which I think is fine for the
"mm-stable" branch that the original patch is in.

But if you end up rebasing that mm-stable branch, then I'd ask you to
either remove/update those commit hashes, or just fold this fix into
the original one. Ok?

             Linus
From d6605e27c9d9fd57f6772de12f5406559fced6b7 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Date: Tue, 6 Dec 2022 11:15:09 -0800
Subject: [PATCH] mm: mmu_gather: allow more than one batch of delayed rmaps

Commit 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB
has been flushed") limited the page batching for the mmu gather
operation when a dirty shared page needed to delay rmap removal until
after the TLB had been flushed.

It did so because it needs to walk that array of pages while still
holding the page table lock, and our mmu_gather infrastructure allows
for batching quite a lot of pages.  We may have thousands of pages
queued up for freeing, and we wanted to walk only the last batch if we
then added a dirty page to the queue.

However, when I limited it to one batch, I didn't think of the
degenerate case of the special first batch that is embedded on-stack in
the mmu_gather structure (called "local") and that only has eight
entries.
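
For reference, the batching structures look roughly like this
(abridged and paraphrased from include/asm-generic/tlb.h around this
series, not a verbatim quote):

    #define MMU_GATHER_BUNDLE  8    /* capacity of the embedded first batch */

    struct mmu_gather_batch {
            struct mmu_gather_batch *next;
            unsigned int            nr;
            unsigned int            max;
            struct encoded_page     *encoded_pages[];
    };

    struct mmu_gather {
            /* ... */
            unsigned int            delayed_rmap : 1;
            struct mmu_gather_batch *active;
            struct mmu_gather_batch local;   /* first batch, lives on the stack */
            struct page             *__pages[MMU_GATHER_BUNDLE];
    };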

So with the right pattern, that "limit delayed rmap to just one batch"
logic will trigger over and over in that first small batch, and we'll
waste a lot of time flushing TLBs every eight pages.
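
To put numbers on that, here is a small userspace toy model of the two
rules (nothing below is kernel code; the names and the batch sizes are
made up for illustration only):

    #include <stdbool.h>
    #include <stdio.h>

    #define LOCAL_BATCH  8      /* the embedded on-stack batch */
    #define BIG_BATCH    511    /* roughly one page worth of pointers */

    struct toy_gather {
            bool delayed_rmap;  /* a queued page still needs rmap removal */
            bool on_local;      /* still filling the embedded batch? */
            int used;           /* entries used in the current batch */
            long flushes;       /* forced TLB flushes so far */
    };

    /* old rule: no new batch at all once a delayed rmap is pending */
    static bool may_grow_old(const struct toy_gather *t)
    {
            return !t->delayed_rmap;
    }

    /* new rule: one more batch is fine while still on the local one */
    static bool may_grow_new(const struct toy_gather *t)
    {
            return !t->delayed_rmap || t->on_local;
    }

    static void add_dirty_shared_page(struct toy_gather *t,
                                      bool (*may_grow)(const struct toy_gather *))
    {
            if (t->used == (t->on_local ? LOCAL_BATCH : BIG_BATCH)) {
                    if (may_grow(t)) {
                            t->on_local = false;    /* chain a larger batch */
                    } else {
                            t->flushes++;           /* flush TLB + rmaps, start over */
                            t->on_local = true;
                            t->delayed_rmap = false;
                    }
                    t->used = 0;
            }
            t->used++;
            t->delayed_rmap = true;   /* dirty shared page: delay its rmap */
    }

    int main(void)
    {
            struct toy_gather old = { .on_local = true }, new = { .on_local = true };

            for (int i = 0; i < 100000; i++) {
                    add_dirty_shared_page(&old, may_grow_old);
                    add_dirty_shared_page(&new, may_grow_new);
            }
            printf("old rule: %ld flushes, new rule: %ld flushes\n",
                   old.flushes, new.flushes);
            return 0;
    }

In this model the old rule ends up flushing about every eight pages,
while the new one flushes about every LOCAL_BATCH + BIG_BATCH pages.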

And those right patterns are trivially triggered by just having shared
mappings with lots of adjacent dirty pages.  Like the 'page_fault3'
subtest of the 'will-it-scale' benchmark, which just maps a shared area,
dirties all pages, and unmaps it.  Rinse and repeat.
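
Something along these lines is enough to hit the pattern (a simplified
stand-in for page_fault3, which differs in details such as threading
and how the shared mapping is created):

    #include <string.h>
    #include <sys/mman.h>

    #define SZ (128UL << 20)    /* 128MB of adjacent pages */

    int main(void)
    {
            for (;;) {
                    /* shared mapping, so dirty pages need delayed rmap handling */
                    char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                    if (p == MAP_FAILED)
                            return 1;
                    memset(p, 1, SZ);   /* dirty every page */
                    munmap(p, SZ);      /* the zap is where mmu_gather batches */
            }
    }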

We still want to limit the batching, but to fix this (easily triggered)
degenerate case, just expand the "only one batch" logic to instead be
"only one batch that isn't the special first on-stack ('local') batch".

That way, when we need to flush the delayed rmaps, we can still limit
our walk to just the last batch - and that first small one.

Fixes: 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB has been flushed")
Reported-by: kernel test robot <yujie.liu@xxxxxxxxx>
Link: https://lore.kernel.org/oe-lkp/202212051534.852804af-yujie.liu@xxxxxxxxx
Tested-by: Huang, Ying <ying.huang@xxxxxxxxx>
Tested-by: Hugh Dickins <hughd@xxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
---
 mm/mmu_gather.c | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 8247553a69c2..2b93cf6ac9ae 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -19,8 +19,8 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	/* No more batching if we have delayed rmaps pending */
-	if (tlb->delayed_rmap)
+	/* Limit batching if we have delayed rmaps pending */
+	if (tlb->delayed_rmap && tlb->active != &tlb->local)
 		return false;
 
 	batch = tlb->active;
@@ -48,31 +48,35 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 }
 
 #ifdef CONFIG_SMP
+static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
+{
+	for (int i = 0; i < batch->nr; i++) {
+		struct encoded_page *enc = batch->encoded_pages[i];
+
+		if (encoded_page_flags(enc)) {
+			struct page *page = encoded_page_ptr(enc);
+			page_remove_rmap(page, vma, false);
+		}
+	}
+}
+
 /**
  * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
  * @tlb: the current mmu_gather
  *
  * Note that because of how tlb_next_batch() above works, we will
- * never start new batches with pending delayed rmaps, so we only
- * need to walk through the current active batch.
+ * never start multiple new batches with pending delayed rmaps, so
+ * we only need to walk through the current active batch and the
+ * original local one.
  */
 void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	struct mmu_gather_batch *batch;
-
 	if (!tlb->delayed_rmap)
 		return;
 
-	batch = tlb->active;
-	for (int i = 0; i < batch->nr; i++) {
-		struct encoded_page *enc = batch->encoded_pages[i];
-
-		if (encoded_page_flags(enc)) {
-			struct page *page = encoded_page_ptr(enc);
-			page_remove_rmap(page, vma, false);
-		}
-	}
-
+	tlb_flush_rmap_batch(&tlb->local, vma);
+	if (tlb->active != &tlb->local)
+		tlb_flush_rmap_batch(tlb->active, vma);
 	tlb->delayed_rmap = 0;
 }
 #endif
-- 
2.38.1.284.gfd9468d787

