The quilt patch titled
     Subject: mm: hold the RCU read lock over calls to ->map_pages
has been removed from the -mm tree.  Its filename was
     mm-hold-the-rcu-read-lock-over-calls-to-map_pages.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: hold the RCU read lock over calls to ->map_pages
Date: Mon, 27 Mar 2023 18:45:15 +0100

Prevent filesystems from doing things which sleep in their map_pages
method.  This is in preparation for a pagefault path protected only by
RCU.

Link: https://lkml.kernel.org/r/20230327174515.1811532-4-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Darrick J. Wong <djwong@xxxxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: David Howells <dhowells@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/filesystems/locking.rst |    4 ++--
 mm/memory.c                           |   11 ++++++++---
 2 files changed, 10 insertions(+), 5 deletions(-)

--- a/Documentation/filesystems/locking.rst~mm-hold-the-rcu-read-lock-over-calls-to-map_pages
+++ a/Documentation/filesystems/locking.rst
@@ -645,7 +645,7 @@ ops		mmap_lock	PageLocked(page)
 open:		yes
 close:		yes
 fault:		yes		can return with page locked
-map_pages:	yes
+map_pages:	read
 page_mkwrite:	yes		can return with page locked
 pfn_mkwrite:	yes
 access:		yes
@@ -661,7 +661,7 @@ locked. The VM will unlock the page.
 
 ->map_pages() is called when VM asks to map easy accessible pages.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
-till "end_pgoff". ->map_pages() is called with page table locked and must
+till "end_pgoff". ->map_pages() is called with the RCU lock held and must
 not block.  If it's not possible to reach a page without blocking,
 filesystem should skip it. Filesystem should use do_set_pte() to setup
 page table entry. Pointer to entry associated with the page is passed in
--- a/mm/memory.c~mm-hold-the-rcu-read-lock-over-calls-to-map_pages
+++ a/mm/memory.c
@@ -4450,6 +4450,7 @@ static vm_fault_t do_fault_around(struct
 	/* The page offset of vmf->address within the VMA. */
 	pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
 	pgoff_t from_pte, to_pte;
+	vm_fault_t ret;
 
 	/* The PTE offset of the start address, clamped to the VMA. */
 	from_pte = max(ALIGN_DOWN(pte_off, nr_pages),
@@ -4465,9 +4466,13 @@ static vm_fault_t do_fault_around(struct
 		return VM_FAULT_OOM;
 	}
 
-	return vmf->vma->vm_ops->map_pages(vmf,
-			vmf->pgoff + from_pte - pte_off,
-			vmf->pgoff + to_pte - pte_off);
+	rcu_read_lock();
+	ret = vmf->vma->vm_ops->map_pages(vmf,
+			vmf->pgoff + from_pte - pte_off,
+			vmf->pgoff + to_pte - pte_off);
+	rcu_read_unlock();
+
+	return ret;
 }
 
 /* Return true if we should do read fault-around, false otherwise */
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

hugetlb-remove-pageheadhuge.patch
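
For filesystem authors reading the new locking.rst rule, a brief illustration of
what the constraint means in practice may help.  Because do_fault_around() now
wraps the callback in rcu_read_lock()/rcu_read_unlock(), a ->map_pages()
implementation must never sleep; most filesystems satisfy this by pointing the
hook at filemap_map_pages(), which already follows the skip-if-it-would-block
rule (it uses folio_trylock() and simply passes over folios it cannot take
without sleeping).  The sketch below is illustrative only and is not part of
the patch; the "myfs_*" names are hypothetical.

/*
 * Illustrative sketch (not from the patch): wiring up the fault handlers
 * for a hypothetical filesystem so that ->map_pages() honours the
 * "called with the RCU lock held and must not block" rule.
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static const struct vm_operations_struct myfs_file_vm_ops = {
	.fault		= filemap_fault,	/* may sleep: not called under RCU */
	.map_pages	= filemap_map_pages,	/* must not sleep: runs under rcu_read_lock() */
	.page_mkwrite	= filemap_page_mkwrite,	/* may sleep */
};

static int myfs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	file_accessed(file);
	vma->vm_ops = &myfs_file_vm_ops;
	return 0;
}

A custom ->map_pages() that does its own folio lookup would need to follow the
same pattern as filemap_map_pages(): use only non-sleeping primitives such as
folio_trylock(), and skip any folio that cannot be mapped without blocking.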