+ mm-tlbbatch-introduce-arch_tlbbatch_should_defer.patch added to mm-unstable branch

The patch titled
     Subject: mm/tlbbatch: introduce arch_tlbbatch_should_defer()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-tlbbatch-introduce-arch_tlbbatch_should_defer.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-tlbbatch-introduce-arch_tlbbatch_should_defer.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Anshuman Khandual <khandual@xxxxxxxxxxxxxxxxxx>
Subject: mm/tlbbatch: introduce arch_tlbbatch_should_defer()
Date: Mon, 17 Jul 2023 21:10:01 +0800

Patch series "arm64: support batched/deferred tlb shootdown during page
reclamation/migration", v11.

Though ARM64 has the hardware to do tlb shootdown, the hardware
broadcasting is not free.  A simple micro-benchmark shows that even on a
Snapdragon 888 with only 8 cores, the overhead of ptep_clear_flush() is
huge even when paging out one page mapped by only one process:

    5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush

When pages are mapped by multiple processes, or when the hardware has more
CPUs, the cost becomes even higher due to the poor scalability of tlb
shootdown.  The same benchmark results in 16.99% CPU consumption on an
ARM64 server with around 100 cores, according to the test in patch 4/4.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
1. only sending tlbi instructions in the first stage -
	arch_tlbbatch_add_mm()
2. waiting for the completion of the tlbi by dsb while doing the tlbbatch
	sync in arch_tlbbatch_flush()
A rough sketch of this two-stage scheme is given right after the list.
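
As an illustration only (patch 4/4 is not included in this mail, and the
helper names and signatures below are assumptions made for the sketch, not
the real arm64 patch), the two stages could look roughly like this on an
architecture with hardware-broadcast TLB invalidation:

	/*
	 * Illustrative sketch, not the actual patch 4/4: issue the TLBI
	 * without waiting (stage 1), then pay for a single barrier when
	 * the whole batch is flushed (stage 2).
	 */
	static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
						struct mm_struct *mm,
						unsigned long uaddr)
	{
		/* Stage 1: broadcast the invalidation, do not wait for completion. */
		__flush_tlb_page_nosync(mm, uaddr);
	}

	static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
	{
		/* Stage 2: a single dsb(ish) waits for all pending TLBIs at once. */
		dsb(ish);
	}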

Testing on the Snapdragon shows that the overhead of ptep_clear_flush() is
removed by the patchset.  The micro-benchmark becomes 5% faster even for
one page mapped by a single process on the Snapdragon 888.

Since BATCHED_UNMAP_TLB_FLUSH is implemented only on x86, the patchset
first does some renaming/extension of the current implementation (patches
1-3), then adds the support on arm64 (patch 4).

This patch (of 4):

The entire scheme of deferred TLB flushing in the reclaim path rests on
the fact that the cost of refilling TLB entries is less than that of
flushing out individual entries by sending IPIs to remote CPUs.  But an
architecture may have different ways to evaluate that.  Hence, apart from
checking TTU_BATCH_FLUSH in the TTU flags, the rest of the decision should
be architecture specific.
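
As an example (illustrative only, not part of this patch): an architecture
whose hardware broadcasts TLB invalidations has no IPIs to avoid, so its
arch_tlbbatch_should_defer() need not consult mm_cpumask() at all; the
capability check below is an assumption made for the sketch, not code from
this series.

	/*
	 * Hypothetical arch_tlbbatch_should_defer() for an architecture
	 * with hardware-broadcast TLBI (e.g. arm64).  Illustrative sketch
	 * only.
	 */
	static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
	{
		/*
		 * No remote IPIs are needed, so deferring is almost always
		 * a win.  Skip it only if an erratum workaround already
		 * forces a barrier after every TLBI, which would defeat
		 * the point of batching.
		 */
		if (unlikely(this_cpu_has_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
			return false;

		return true;
	}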

[yangyicong@xxxxxxxxxxxxx: rebase and fix incorrect return value type]
Link: https://lkml.kernel.org/r/20230717131004.12662-1-yangyicong@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20230717131004.12662-2-yangyicong@xxxxxxxxxx
Signed-off-by: Anshuman Khandual <khandual@xxxxxxxxxxxxxxxxxx>
[https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@xxxxxxxxxxxxxxxxxx/]
Signed-off-by: Yicong Yang <yangyicong@xxxxxxxxxxxxx>
Reviewed-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Reviewed-by: Barry Song <baohua@xxxxxxxxxx>
Reviewed-by: Xin Hao <xhao@xxxxxxxxxxxxxxxxx>
Tested-by: Punit Agrawal <punit.agrawal@xxxxxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Darren Hart <darren@xxxxxxxxxxxxxxxxxxxxxx>
Cc: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: lipeifeng <lipeifeng@xxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Steven Miao <realmz6@xxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Zeng Tao <prime.zeng@xxxxxxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Nadav Amit <namit@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/x86/include/asm/tlbflush.h |   12 ++++++++++++
 mm/rmap.c                       |    9 +--------
 2 files changed, 13 insertions(+), 8 deletions(-)

--- a/arch/x86/include/asm/tlbflush.h~mm-tlbbatch-introduce-arch_tlbbatch_should_defer
+++ a/arch/x86/include/asm/tlbflush.h
@@ -253,6 +253,18 @@ static inline void flush_tlb_page(struct
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	bool should_defer = false;
+
+	/* If remote CPUs need to be flushed then defer batch the flush */
+	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
+		should_defer = true;
+	put_cpu();
+
+	return should_defer;
+}
+
 static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 {
 	/*
--- a/mm/rmap.c~mm-tlbbatch-introduce-arch_tlbbatch_should_defer
+++ a/mm/rmap.c
@@ -688,17 +688,10 @@ retry:
  */
 static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 {
-	bool should_defer = false;
-
 	if (!(flags & TTU_BATCH_FLUSH))
 		return false;
 
-	/* If remote CPUs need to be flushed then defer batch the flush */
-	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
-		should_defer = true;
-	put_cpu();
-
-	return should_defer;
+	return arch_tlbbatch_should_defer(mm);
 }
 
 /*
_

Patches currently in -mm which might be from khandual@xxxxxxxxxxxxxxxxxx are

mm-tlbbatch-introduce-arch_tlbbatch_should_defer.patch



