Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

Hi Barry,

I did some testing on a Kunpeng ARM64 machine using UnixBench.

The test results are below.

With one core, we can see a performance improvement of around +30%.
./Run -c 1 -i 1 shell1
w/o
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4       5481.0   1292.7
                                                                   ========
System Benchmarks Index Score (Partial Only)                         1292.7

w/
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4       6974.6   1645.0
                                                                   ========
System Benchmarks Index Score (Partial Only)                         1645.0


But with all cores, there is a slight performance degradation of around -5%.

./Run -c 96 -i 1 shell1
w/o
Shell Scripts (1 concurrent)                  80765.5 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4      80765.5  19048.5
                                                                   ========
System Benchmarks Index Score (Partial Only)                        19048.5

w/
Shell Scripts (1 concurrent)                  76333.6 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4      76333.6  18003.2
                                                                   ========
System Benchmarks Index Score (Partial Only)                        18003.2

----------------------------------------------------------------------------------------------

After discussing with you, I made some changes to the patch:

index a52381a680db..1ecba81f1277 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
+#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
                flush_tlb_mm(mm);
+#else
+               dsb(ish);
+#endif
                /*
                 * If the new TLB flushing is pending during flushing, leave
                 * mm->tlb_flush_batched as is, to avoid losing flushing.
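
For context, here is a minimal sketch of how the whole of
flush_tlb_batched_pending() reads with the change applied (reconstructed
from the mainline function of this era, not copied verbatim from the
series):

void flush_tlb_batched_pending(struct mm_struct *mm)
{
	int batch = atomic_read(&mm->tlb_flush_batched);
	int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

	if (pending != flushed) {
#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
		flush_tlb_mm(mm);
#else
		/*
		 * The deferred TLBIs have already been broadcast by the
		 * batching path, so only their completion needs waiting for.
		 */
		dsb(ish);
#endif
		/*
		 * If the new TLB flushing is pending during flushing, leave
		 * mm->tlb_flush_batched as is, to avoid losing flushing.
		 */
		atomic_cmpxchg(&mm->tlb_flush_batched, batch,
			       pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
	}
}

The point of the #else branch is that flush_tlb_mm() would issue a second
full broadcast invalidation, while dsb(ish) merely waits for the TLBIs the
reclaim path has already sent.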

With this change, there is a performance improvement with all cores, above +30%.

./Run -c 96 -i 1 shell1
96 CPUs in system; running 96 parallel copies of tests

Shell Scripts (1 concurrent)                 109229.0 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4     109229.0  25761.6
                                                                   ========
System Benchmarks Index Score (Partial Only)                        25761.6


Tested-by: Xin Hao <xhao@xxxxxxxxxxxxxxxxx>

Looking forward to your next version of the patch.

On 7/11/22 11:46 AM, Barry Song wrote:
Though ARM64 has the hardware to do TLB shootdown, the hardware
broadcasting is not free.
A simple micro benchmark shows that even on a Snapdragon 888 with only
8 cores, the overhead of ptep_clear_flush is huge, even for paging
out one page mapped by only one process:
5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush

While pages are mapped by multiple processes, or the HW has more CPUs,
the cost should become even higher due to the bad scalability of
TLB shootdown.

The same benchmark results in 16.99% CPU consumption on an ARM64
server with around 100 cores, according to Yicong's test on patch
4/4.
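
A micro benchmark of this kind could look roughly like the following
sketch (a reconstruction; the mapping size, flags and fill pattern are
assumptions, not necessarily the code actually used). It repeatedly pages
out a small anonymous mapping with MADV_PAGEOUT so that the reclaim/unmap
path, and hence ptep_clear_flush, dominates the profile:

#include <string.h>
#include <sys/mman.h>

#define SIZE	(4 * 1024 * 1024)

int main(void)
{
	/* One private anonymous mapping, touched then paged out in a loop. */
	unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	for (;;) {
		memset(p, 0x11, SIZE);		/* fault the pages in */
		madvise(p, SIZE, MADV_PAGEOUT);	/* force reclaim/unmap */
	}
}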

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by:
1. only sending tlbi instructions in the first stage -
	arch_tlbbatch_add_mm()
2. waiting for the completion of the tlbi by dsb while doing the tlbbatch
	sync in arch_tlbbatch_flush() (see the sketch below)
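
A minimal sketch of these two stages on arm64 (a paraphrase of the
approach; the exact helper bodies in the series may differ):

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
	/*
	 * Stage 1: make the PTE update visible, then broadcast the
	 * invalidation for this ASID without waiting for completion.
	 */
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	dsb(ishst);
	__tlbi(aside1is, asid);
	__tlbi_user(aside1is, asid);
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/* Stage 2: one barrier waits for all TLBIs issued in stage 1. */
	dsb(ish);
}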
My testing on Snapdragon shows the overhead of ptep_clear_flush
is removed by the patchset. The micro benchmark becomes 5% faster
even for one page mapped by a single process on Snapdragon 888.


-v2:
1. Collected Yicong's test result on the Kunpeng 920 ARM64 server;
2. Removed the redundant vma parameter in arch_tlbbatch_add_mm()
    according to the comments of Peter Zijlstra and Dave Hansen;
3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
    is empty, according to the comments of Nadav Amit.

Thanks to Yicong, Peter, Dave and Nadav for your testing, reviewing
and comments.

-v1:
https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@xxxxxxxxx/

Barry Song (4):
   Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
     apply to ARM64"
   mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
   mm: rmap: Extend tlbbatch APIs to fit new platforms
   arm64: support batched/deferred tlb shootdown during page reclamation

  Documentation/features/arch-support.txt       |  1 -
  .../features/vm/TLB/arch-support.txt          |  2 +-
  arch/arm/Kconfig                              |  1 +
  arch/arm64/Kconfig                            |  1 +
  arch/arm64/include/asm/tlbbatch.h             | 12 ++++++++++
  arch/arm64/include/asm/tlbflush.h             | 23 +++++++++++++++++--
  arch/loongarch/Kconfig                        |  1 +
  arch/mips/Kconfig                             |  1 +
  arch/openrisc/Kconfig                         |  1 +
  arch/powerpc/Kconfig                          |  1 +
  arch/riscv/Kconfig                            |  1 +
  arch/s390/Kconfig                             |  1 +
  arch/um/Kconfig                               |  1 +
  arch/x86/Kconfig                              |  1 +
  arch/x86/include/asm/tlbflush.h               |  3 ++-
  mm/Kconfig                                    |  3 +++
  mm/rmap.c                                     | 14 +++++++----
  17 files changed, 59 insertions(+), 9 deletions(-)
  create mode 100644 arch/arm64/include/asm/tlbbatch.h

--
Best Regards!
Xin Hao



