The patch titled
     Subject: mm: run the fault-around code under the VMA lock
has been added to the -mm mm-unstable branch.  Its filename is
     mm-run-the-fault-around-code-under-the-vma-lock.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-run-the-fault-around-code-under-the-vma-lock.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing
    your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: run the fault-around code under the VMA lock
Date: Mon, 24 Jul 2023 19:54:08 +0100

The map_pages fs method should be safe to run under the VMA lock instead
of the mmap lock.  This should have a measurable reduction in contention
on the mmap lock.

Link: https://lkml.kernel.org/r/20230724185410.1124082-9-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Arjun Roy <arjunroy@xxxxxxxxxx>
Cc: Eric Dumazet <edumazet@xxxxxxxxxx>
Cc: Punit Agrawal <punit.agrawal@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/mm/memory.c~mm-run-the-fault-around-code-under-the-vma-lock
+++ a/mm/memory.c
@@ -4659,11 +4659,6 @@ static vm_fault_t do_read_fault(struct v
 	vm_fault_t ret = 0;
 	struct folio *folio;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		vma_end_read(vmf->vma);
-		return VM_FAULT_RETRY;
-	}
-
 	/*
 	 * Let's call ->map_pages() first and use ->fault() as fallback
 	 * if page by the offset is not ready to be mapped (cold cache or
@@ -4675,6 +4670,11 @@ static vm_fault_t do_read_fault(struct v
 			return ret;
 	}
 
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+		vma_end_read(vmf->vma);
+		return VM_FAULT_RETRY;
+	}
+
 	ret = __do_fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
 		return ret;
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

rmap-pass-the-folio-to-__page_check_anon_rmap.patch
highmem-add-memcpy_to_folio-and-memcpy_from_folio.patch
affs-convert-affs_symlink_read_folio-to-use-the-folio.patch
affs-convert-data-read-and-write-to-use-folios.patch
migrate-use-folio_set_bh-instead-of-set_bh_page.patch
ntfs3-convert-ntfs_get_block_vbo-to-use-a-folio.patch
jbd2-use-a-folio-in-jbd2_journal_write_metadata_buffer.patch
buffer-remove-set_bh_page.patch
zswap-make-zswap_store-take-a-folio.patch
memcg-convert-get_obj_cgroup_from_page-to-get_obj_cgroup_from_folio.patch
swap-remove-some-calls-to-compound_head-in-swap_readpage.patch
zswap-make-zswap_load-take-a-folio.patch
mm-remove-config_per_vma_lock-ifdefs.patch
mm-allow-per-vma-locks-on-file-backed-vmas.patch
mm-move-fault_flag_vma_lock-check-from-handle_mm_fault.patch
mm-handle-pud-faults-under-the-vma-lock.patch
mm-handle-some-pmd-faults-under-the-vma-lock.patch
mm-move-fault_flag_vma_lock-check-down-in-handle_pte_fault.patch
mm-move-fault_flag_vma_lock-check-down-from-do_fault.patch
mm-run-the-fault-around-code-under-the-vma-lock.patch
mm-handle-swap-and-numa-pte-faults-under-the-vma-lock.patch
mm-handle-faults-that-merely-update-the-accessed-bit-under-the-vma-lock.patch
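
For readers skimming the diff above, here is a condensed sketch of how
do_read_fault() is ordered after this patch.  It is not verbatim
mm/memory.c: the fault-around predicate (should_fault_around()) and the
elided tail of the function are taken as assumptions from the surrounding
kernel source of that time, not from the diff itself.

/*
 * Sketch only, assuming the mm/memory.c context this patch applies to;
 * should_fault_around(), do_fault_around() and __do_fault() are the
 * existing helpers, and the remainder of the function is elided.
 */
static vm_fault_t do_read_fault(struct vm_fault *vmf)
{
	vm_fault_t ret = 0;

	/*
	 * Fault-around (->map_pages) is safe under the per-VMA lock, so it
	 * now runs first and can satisfy the fault without the mmap lock.
	 */
	if (should_fault_around(vmf)) {
		ret = do_fault_around(vmf);
		if (ret)
			return ret;
	}

	/*
	 * Only the ->fault() slow path still needs the mmap lock: if we
	 * reached it under the VMA lock, drop the lock and ask the caller
	 * to retry under the mmap lock.
	 */
	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		vma_end_read(vmf->vma);
		return VM_FAULT_RETRY;
	}

	ret = __do_fault(vmf);
	/* ... rest of do_read_fault() is unchanged by this patch ... */
	return ret;
}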