This is a note to let you know that I've just added the patch titled

    ARM: 9328/1: mm: try VMA lock-based page fault handling first

to the 6.7-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm-9328-1-mm-try-vma-lock-based-page-fault-handling.patch
and it can be found in the queue-6.7 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 356935a2011365d59bbfc6728029166f964d39c3
Author: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Date:   Thu Oct 19 12:21:35 2023 +0100

    ARM: 9328/1: mm: try VMA lock-based page fault handling first

    [ Upstream commit c16af1212479570454752671a170a1756e11fdfb ]

    Attempt VMA lock-based page fault handling first, and fall back to the
    existing mmap_lock-based handling if that fails. The ebizzy benchmark
    shows a 25% improvement on qemu with 2 cpus.

    Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
    Signed-off-by: Russell King (Oracle) <rmk+kernel@xxxxxxxxxxxxxxx>
    Stable-dep-of: e870920bbe68 ("arch/arm/mm: fix major fault accounting when retrying under per-VMA lock")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index f8567e95f98be..8f47d6762ea4b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT if CPU_V7
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_HUGETLBFS if ARM_LPAE
+	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_MEMTEST
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index fef62e4a9edde..e96fb40b9cc32 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -278,6 +278,35 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+	if (!(flags & FAULT_FLAG_USER))
+		goto lock_mmap;
+
+	vma = lock_vma_under_rcu(mm, addr);
+	if (!vma)
+		goto lock_mmap;
+
+	if (!(vma->vm_flags & vm_flags)) {
+		vma_end_read(vma);
+		goto lock_mmap;
+	}
+	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
+
+	if (!(fault & VM_FAULT_RETRY)) {
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
+	}
+	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+
+	/* Quick path to respond to signals */
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+lock_mmap:
+
 retry:
 	vma = lock_mm_and_find_vma(mm, addr, regs);
 	if (unlikely(!vma)) {
@@ -316,6 +345,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	}
 	mmap_read_unlock(mm);
 
+done:
 	/*
 	 * Handle the "normal" case first - VM_FAULT_MAJOR