On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <xhao@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi Barry.
>
> I did some tests on a Kunpeng arm64 machine using Unixbench.
>
> The test results are as below.
>
> With one core, we can see a performance improvement above +30%.

I am really pleased to see the 30%+ improvement on unixbench on a
single core.

> ./Run -c 1 -i 1 shell1
> w/o
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4       5481.0   1292.7
>                                                                    ========
> System Benchmarks Index Score (Partial Only)                        1292.7
>
> w/
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4       6974.6   1645.0
>                                                                    ========
> System Benchmarks Index Score (Partial Only)                        1645.0
>
>
> But with all cores, there is a small performance degradation, about -5%.

That is sad, as we might get more concurrency between mprotect(),
madvise(), mremap(), zap_pte_range() and the deferred tlbi.

>
> ./Run -c 96 -i 1 shell1
> w/o
> Shell Scripts (1 concurrent)                  80765.5 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4      80765.5  19048.5
>                                                                    ========
> System Benchmarks Index Score (Partial Only)                       19048.5
>
> w/
> Shell Scripts (1 concurrent)                  76333.6 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4      76333.6  18003.2
>                                                                    ========
> System Benchmarks Index Score (Partial Only)                       18003.2
>
> ----------------------------------------------------------------------------------------------
>
>
> After discussing with you, I made some changes in the patch.
>
> index a52381a680db..1ecba81f1277 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>                 int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>
>                 if (pending != flushed) {
> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
>                         flush_tlb_mm(mm);
> +#else
> +                       dsb(ish);
> +#endif

I was guessing the problem might be flush_tlb_batched_pending(), so I
asked you to change this to verify my guess.

>                         /*
>                          * If the new TLB flushing is pending during flushing, leave
>                          * mm->tlb_flush_batched as is, to avoid losing flushing.
>
> There is a performance improvement with all cores, above +30%.

But I don't think it is a proper patch. There is no guarantee that the
cpu calling flush_tlb_batched_pending() is exactly the cpu that sent
the deferred tlbi, so the solution is unsafe. But since this temporary
code can bring the 30%+ performance improvement back for high
concurrency, we have huge potential to finally make it work.
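To spell out what correct pairing requires: in this series,
arch_tlbbatch_add_mm() issues the broadcast TLBI without waiting, and
arch_tlbbatch_flush() later waits once with dsb(ish) on the same CPU.
Roughly like the sketch below (a simplified sketch, not the exact v2
diff; the helpers are the existing ones in
arch/arm64/include/asm/tlbflush.h):

/* stage 1: broadcast the invalidate for this mm's ASID, don't wait */
static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	dsb(ishst);		/* make the PTE changes visible first */
	__tlbi(aside1is, asid);
	__tlbi_user(aside1is, asid);
}

/* stage 2: one barrier waits for all the TLBIs queued above */
static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	dsb(ish);
}

dsb(ish) only guarantees completion of the TLB maintenance issued by
the CPU executing it, so the barrier has to run on the CPU that queued
the TLBIs. Reclaim keeps that pairing on one CPU, but
flush_tlb_batched_pending() can run on any CPU, which is why the bare
dsb(ish) hack above is unsafe.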
Unfortunately I don't have an arm64 server to debug on this. I only
have 8 cores, which are unlikely to reproduce the regression that
happens at high concurrency with 96 parallel tasks.

So I'd like to ask if @yicong or someone else working on Kunpeng or
other arm64 servers is able to actually debug this and figure out a
proper patch, which could then be added as 5/5 into this series.

> ./Run -c 96 -i 1 shell1
> 96 CPUs in system; running 96 parallel copies of tests
>
> Shell Scripts (1 concurrent)                 109229.0 lpm   (60.0 s, 1 samples)
> System Benchmarks Partial Index              BASELINE       RESULT    INDEX
> Shell Scripts (1 concurrent)                     42.4     109229.0  25761.6
>                                                                    ========
> System Benchmarks Index Score (Partial Only)                       25761.6
>
> Tested-by: Xin Hao <xhao@xxxxxxxxxxxxxxxxx>

Thanks for your testing!

> Looking forward to your next version of the patch.
>
> On 7/11/22 11:46 AM, Barry Song wrote:
> > Though ARM64 has the hardware to do tlb shootdown, the hardware
> > broadcasting is not free.
> > A simple micro benchmark shows that even on a snapdragon 888 with
> > only 8 cores, the overhead of ptep_clear_flush is huge, even for
> > paging out one page mapped by only one process:
> > 5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
> >
> > While pages are mapped by multiple processes or the HW has more
> > CPUs, the cost should become even higher due to the bad scalability
> > of tlb shootdown.
> >
> > The same benchmark can result in 16.99% CPU consumption on an ARM64
> > server with around 100 cores, according to Yicong's test on patch
> > 4/4.
> >
> > This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
> > 1. only sending tlbi instructions in the first stage -
> >    arch_tlbbatch_add_mm()
> > 2. waiting for the completion of tlbi by dsb while doing tlbbatch
> >    sync in arch_tlbbatch_flush()
> >
> > My testing on snapdragon shows the overhead of ptep_clear_flush
> > is removed by the patchset. The micro benchmark becomes 5% faster
> > even for one page mapped by a single process on snapdragon 888.
> >
> >
> > -v2:
> > 1. Collected Yicong's test result on a kunpeng920 ARM64 server;
> > 2. Removed the redundant vma parameter in arch_tlbbatch_add_mm()
> >    according to the comments of Peter Zijlstra and Dave Hansen
> > 3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
> >    is empty, according to the comments of Nadav Amit
> >
> > Thanks to Yicong, Peter, Dave and Nadav for your testing, reviewing
> > and comments.
> >
> > -v1:
> > https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@xxxxxxxxx/
> >
> > Barry Song (4):
> >   Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
> >     apply to ARM64"
> >   mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
> >   mm: rmap: Extend tlbbatch APIs to fit new platforms
> >   arm64: support batched/deferred tlb shootdown during page reclamation
> >
> >  Documentation/features/arch-support.txt        |  1 -
> >  .../features/vm/TLB/arch-support.txt           |  2 +-
> >  arch/arm/Kconfig                               |  1 +
> >  arch/arm64/Kconfig                             |  1 +
> >  arch/arm64/include/asm/tlbbatch.h              | 12 ++++++++++
> >  arch/arm64/include/asm/tlbflush.h              | 23 +++++++++++++++++--
> >  arch/loongarch/Kconfig                         |  1 +
> >  arch/mips/Kconfig                              |  1 +
> >  arch/openrisc/Kconfig                          |  1 +
> >  arch/powerpc/Kconfig                           |  1 +
> >  arch/riscv/Kconfig                             |  1 +
> >  arch/s390/Kconfig                              |  1 +
> >  arch/um/Kconfig                                |  1 +
> >  arch/x86/Kconfig                               |  1 +
> >  arch/x86/include/asm/tlbflush.h                |  3 ++-
> >  mm/Kconfig                                     |  3 +++
> >  mm/rmap.c                                      | 14 +++++++----
> >  17 files changed, 59 insertions(+), 9 deletions(-)
> >  create mode 100644 arch/arm64/include/asm/tlbbatch.h
>
> --
> Best Regards!
> Xin Hao

Thanks
Barry