After VMA lock-based page fault handling is enabled, a bad access detected
under the per-VMA lock still falls back to mmap_lock-based handling, which
means an unnecessary mmap_lock acquisition and a second VMA lookup.

An lmbench test shows a 34% improvement from these changes on arm64:

  lat_sig -P 1 prot lat_sig    0.29194 -> 0.19198

Only build-tested on architectures other than arm64.

v2:
- better changelog, and describe the counting changes, suggested by
  Suren Baghdasaryan
- add RB

Kefeng Wang (7):
  arm64: mm: cleanup __do_page_fault()
  arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
  arm: mm: accelerate pagefault when VM_FAULT_BADACCESS
  powerpc: mm: accelerate pagefault when badaccess
  riscv: mm: accelerate pagefault when badaccess
  s390: mm: accelerate pagefault when badaccess
  x86: mm: accelerate pagefault when badaccess

 arch/arm/mm/fault.c     |  4 +++-
 arch/arm64/mm/fault.c   | 31 ++++++++++---------------------
 arch/powerpc/mm/fault.c | 33 ++++++++++++++++++++-------------
 arch/riscv/mm/fault.c   |  5 ++++-
 arch/s390/mm/fault.c    |  3 ++-
 arch/x86/mm/fault.c     | 23 ++++++++++++++---------
 6 files changed, 53 insertions(+), 46 deletions(-)

-- 
2.27.0
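
[Editor's note: the shape of the change, using arm64 as the example, is
roughly the following. This is a simplified sketch for illustration, not
the literal diff; the helper names (lock_vma_under_rcu, vma_end_read,
count_vm_vma_lock_event) and the VM_FAULT_BADACCESS code follow
arch/arm64/mm/fault.c, but the exact labels and accounting in the patches
may differ.]

	/*
	 * Sketch of the per-VMA-lock fast path in do_page_fault()
	 * (based on arch/arm64/mm/fault.c; details may differ from
	 * the actual patches).
	 */
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;		/* no VMA found: use the mmap_lock path */

	if (!(vma->vm_flags & vm_flags)) {
		/*
		 * The VMA does not permit this access. Previously this did
		 * "vma_end_read(vma); goto lock_mmap;", retaking mmap_lock
		 * and walking the VMA tree again only to reach the same
		 * conclusion. With this series the bad access is reported
		 * directly from the per-VMA-lock path.
		 */
		vma_end_read(vma);
		fault = VM_FAULT_BADACCESS;
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;		/* deliver SIGSEGV from the common exit */
	}

	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);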