The quilt patch titled
     Subject: arm64: mm: cleanup __do_page_fault()
has been removed from the -mm tree.  Its filename was
     arm64-mm-cleanup-__do_page_fault.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: arm64: mm: cleanup __do_page_fault()
Date: Wed, 3 Apr 2024 16:37:59 +0800

Patch series "arch/mm/fault: accelerate pagefault when badaccess", v2.

After VMA lock-based page fault handling is enabled, a bad access that is
detected under the per-VMA lock still falls back to mmap_lock-based
handling, which costs an unnecessary mmap_lock acquisition and a second
VMA lookup.  A test with lmbench shows a 34% improvement from these
changes on arm64:

  lat_sig -P 1 prot lat_sig    0.29194 -> 0.19198

This patch (of 7):

__do_page_fault() only calls handle_mm_fault() after the vm_flags check,
and it is only called by do_page_fault(), so squash it into
do_page_fault() to clean up the code.
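The direction of the series can be sketched as follows.  This is an
illustrative fragment only, not part of this patch: it assumes the
existing per-VMA lock helpers lock_vma_under_rcu() and vma_end_read() and
the arm64-local VM_FAULT_BADACCESS code, and shows a bad access being
reported directly from the per-VMA-lock path instead of falling back to
the mmap_lock path:

	/* Sketch (not from this patch): report bad access under the per-VMA lock. */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		if (!(vma->vm_flags & vm_flags)) {
			/* Bad access: no point retrying under mmap_lock. */
			vma_end_read(vma);
			fault = VM_FAULT_BADACCESS;
			goto done;
		}
		fault = handle_mm_fault(vma, addr,
					mm_flags | FAULT_FLAG_VMA_LOCK, regs);
		/* ... existing signal/retry handling, else fall back ... */
	}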
Link: https://lkml.kernel.org/r/20240403083805.1818160-1-wangkefeng.wang@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240403083805.1818160-2-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/mm/fault.c |   27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

--- a/arch/arm64/mm/fault.c~arm64-mm-cleanup-__do_page_fault
+++ a/arch/arm64/mm/fault.c
@@ -486,25 +486,6 @@ static void do_bad_area(unsigned long fa
 	}
 }
 
-#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
-
-static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
-{
-	/*
-	 * Ok, we have a good vm_area for this memory access, so we can handle
-	 * it.
-	 * Check that the permissions on the VMA allow for the fault which
-	 * occurred.
-	 */
-	if (!(vma->vm_flags & vm_flags))
-		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
-}
-
 static bool is_el0_instruction_abort(unsigned long esr)
 {
 	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
@@ -519,6 +500,9 @@ static bool is_write_abort(unsigned long
 	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
 }
 
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+
 static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 				   struct pt_regs *regs)
 {
@@ -617,7 +601,10 @@ retry:
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	if (!(vma->vm_flags & vm_flags))
+		fault = VM_FAULT_BADACCESS;
+	else
+		fault = handle_mm_fault(vma, addr, mm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

arm64-mm-drop-vm_fault_badmap-vm_fault_badaccess.patch
arm-mm-drop-vm_fault_badmap-vm_fault_badaccess.patch
mm-move-mm-counter-updating-out-of-set_pte_range.patch
mm-filemap-batch-mm-counter-updating-in-filemap_map_pages.patch
mm-swapfile-check-usable-swap-device-in-__folio_throttle_swaprate.patch
mm-memory-check-userfaultfd_wp-in-vmf_orig_pte_uffd_wp.patch