The patch titled
     Subject: mm: handle shared faults under the VMA lock
has been added to the -mm mm-unstable branch.  Its filename is
     mm-handle-shared-faults-under-the-vma-lock.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-handle-shared-faults-under-the-vma-lock.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: handle shared faults under the VMA lock
Date: Fri, 6 Oct 2023 20:53:15 +0100

There are many implementations of ->fault and some of them depend on
mmap_lock being held.  All vm_ops that implement ->map_pages() end up
calling filemap_fault(), which I have audited to be sure it does not rely
on mmap_lock.  So (for now) key off ->map_pages existing as a flag to
indicate that it's safe to call ->fault while only holding the vma lock.

Link: https://lkml.kernel.org/r/20231006195318.4087158-4-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

--- a/mm/memory.c~mm-handle-shared-faults-under-the-vma-lock
+++ a/mm/memory.c
@@ -3045,6 +3045,21 @@ static inline void wp_page_reuse(struct
 	count_vm_event(PGREUSE);
 }
 
+/*
+ * We could add a bitflag somewhere, but for now, we know that all
+ * vm_ops that have a ->map_pages have been audited and don't need
+ * the mmap_lock to be held.
+ */
+static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+
+	if (vma->vm_ops->map_pages || !(vmf->flags & FAULT_FLAG_VMA_LOCK))
+		return 0;
+	vma_end_read(vma);
+	return VM_FAULT_RETRY;
+}
+
 static vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -4672,10 +4687,9 @@ static vm_fault_t do_shared_fault(struct
 	vm_fault_t ret, tmp;
 	struct folio *folio;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		vma_end_read(vma);
-		return VM_FAULT_RETRY;
-	}
+	ret = vmf_can_call_fault(vmf);
+	if (ret)
+		return ret;
 
 	ret = __do_fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-make-lock_folio_maybe_drop_mmap-vma-lock-aware.patch
mm-call-wp_page_copy-under-the-vma-lock.patch
mm-handle-shared-faults-under-the-vma-lock.patch
mm-handle-cow-faults-under-the-vma-lock.patch
mm-handle-read-faults-under-the-vma-lock.patch
mm-handle-write-faults-to-ro-pages-under-the-vma-lock.patch
iomap-hold-state_lock-over-call-to-ifs_set_range_uptodate.patch
iomap-protect-read_bytes_pending-with-the-state_lock.patch
mm-add-folio_end_read.patch
ext4-use-folio_end_read.patch
buffer-use-folio_end_read.patch
iomap-use-folio_end_read.patch
bitops-add-xor_unlock_is_negative_byte.patch
alpha-implement-xor_unlock_is_negative_byte.patch
m68k-implement-xor_unlock_is_negative_byte.patch
mips-implement-xor_unlock_is_negative_byte.patch
powerpc-implement-arch_xor_unlock_is_negative_byte-on-32-bit.patch
riscv-implement-xor_unlock_is_negative_byte.patch
s390-implement-arch_xor_unlock_is_negative_byte.patch
mm-delete-checks-for-xor_unlock_is_negative_byte.patch
mm-add-folio_xor_flags_has_waiters.patch
mm-make-__end_folio_writeback-return-void.patch
mm-use-folio_xor_flags_has_waiters-in-folio_end_writeback.patch
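
For context, the read-fault and COW-fault patches listed above apply the
same conversion to the other fault handlers.  A minimal sketch of the
resulting calling convention -- do_example_fault() is a hypothetical
handler mirroring the do_shared_fault() hunk above, not code from any of
these patches:

	static vm_fault_t do_example_fault(struct vm_fault *vmf)
	{
		vm_fault_t ret;

		/*
		 * vmf_can_call_fault() returns 0 when calling ->fault is
		 * safe: either mmap_lock is still held (FAULT_FLAG_VMA_LOCK
		 * clear), or ->map_pages is set, in which case ->fault has
		 * been audited not to need mmap_lock.  Otherwise it drops
		 * the VMA lock and asks the caller to retry under mmap_lock.
		 */
		ret = vmf_can_call_fault(vmf);
		if (ret)
			return ret;

		/* Safe to call into the filesystem's ->fault now. */
		return __do_fault(vmf);
	}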