The patch titled
     Subject: mm: handle_pte_fault() use pte_offset_map_rw_nolock()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-handle_pte_fault-use-pte_offset_map_rw_nolock.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-handle_pte_fault-use-pte_offset_map_rw_nolock.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Subject: mm: handle_pte_fault() use pte_offset_map_rw_nolock()
Date: Thu, 26 Sep 2024 14:46:19 +0800

In handle_pte_fault(), we may modify the vmf->pte after acquiring the
vmf->ptl, so convert it to use pte_offset_map_rw_nolock().  Since we
will do the pte_same() check anyway, there is no need to get pmdval for
a pmd_same() check; just pass a dummy variable to it.

Link: https://lkml.kernel.org/r/af8d694853b44c5a6018403ae435440e275854c7.1727332572.git.zhengqi.arch@xxxxxxxxxxxxx
Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/mm/memory.c~mm-handle_pte_fault-use-pte_offset_map_rw_nolock
+++ a/mm/memory.c
@@ -5727,14 +5727,24 @@ static vm_fault_t handle_pte_fault(struc
 		vmf->pte = NULL;
 		vmf->flags &= ~FAULT_FLAG_ORIG_PTE_VALID;
 	} else {
+		pmd_t dummy_pmdval;
+
 		/*
 		 * A regular pmd is established and it can't morph into a huge
 		 * pmd by anon khugepaged, since that takes mmap_lock in write
 		 * mode; but shmem or file collapse to THP could still morph
 		 * it into a huge pmd: just retry later if so.
+		 *
+		 * Use the maywrite version to indicate that vmf->pte may be
+		 * modified, but since we will use pte_same() to detect the
+		 * change of the !pte_none() entry, there is no need to recheck
+		 * the pmdval.  Here we choose to pass a dummy variable instead
+		 * of NULL, which helps new users think about why this place is
+		 * special.
 		 */
-		vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
-						 vmf->address, &vmf->ptl);
+		vmf->pte = pte_offset_map_rw_nolock(vmf->vma->vm_mm, vmf->pmd,
+						    vmf->address, &dummy_pmdval,
+						    &vmf->ptl);
 		if (unlikely(!vmf->pte))
 			return 0;
 		vmf->orig_pte = ptep_get_lockless(vmf->pte);
_

Patches currently in -mm which might be from zhengqi.arch@xxxxxxxxxxxxx are

mm-pgtable-introduce-pte_offset_map_rorw_nolock.patch
powerpc-assert_pte_locked-use-pte_offset_map_ro_nolock.patch
mm-filemap-filemap_fault_recheck_pte_none-use-pte_offset_map_ro_nolock.patch
mm-khugepaged-__collapse_huge_page_swapin-use-pte_offset_map_ro_nolock.patch
arm-adjust_pte-use-pte_offset_map_rw_nolock.patch
mm-handle_pte_fault-use-pte_offset_map_rw_nolock.patch
mm-khugepaged-collapse_pte_mapped_thp-use-pte_offset_map_rw_nolock.patch
mm-copy_pte_range-use-pte_offset_map_rw_nolock.patch
mm-mremap-move_ptes-use-pte_offset_map_rw_nolock.patch
mm-page_vma_mapped_walk-map_pte-use-pte_offset_map_rw_nolock.patch
mm-userfaultfd-move_pages_pte-use-pte_offset_map_rw_nolock.patch
mm-multi-gen-lru-walk_pte_range-use-pte_offset_map_rw_nolock.patch
mm-pgtable-remove-pte_offset_map_nolock.patch
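
For readers new to the ro/rw split of pte_offset_map_nolock(), below is a
minimal, illustrative sketch (not part of the patch) of the locking pattern
the changelog relies on.  The helper name fault_pte_still_valid() is made up
for illustration; the pte_offset_map_rw_nolock() signature is the one
introduced earlier in this series, and the rest are long-standing page-table
helpers.

	/*
	 * Sketch of the pattern used in handle_pte_fault(): map the PTE
	 * without taking the PTE lock, snapshot the entry, then take the
	 * lock and revalidate with pte_same().  Because the snapshot is of
	 * a !pte_none() entry, a racing collapse that replaces the page
	 * table also changes the PTE, so pte_same() fails and we retry --
	 * no pmd_same() recheck against the returned pmdval is needed.
	 */
	static bool fault_pte_still_valid(struct mm_struct *mm, pmd_t *pmd,
					  unsigned long addr)
	{
		pmd_t dummy_pmdval;	/* required by the API, deliberately unused */
		spinlock_t *ptl;
		pte_t *pte, orig;

		pte = pte_offset_map_rw_nolock(mm, pmd, addr, &dummy_pmdval, &ptl);
		if (!pte)
			return false;		/* no page table: caller retries */

		orig = ptep_get_lockless(pte);	/* lockless snapshot, may go stale */

		spin_lock(ptl);
		if (!pte_same(ptep_get(pte), orig)) {
			/* PTE changed under us (e.g. collapse or zap): bail out */
			pte_unmap_unlock(pte, ptl);
			return false;
		}

		/* ... safe to modify *pte here while holding ptl ... */
		pte_unmap_unlock(pte, ptl);
		return true;
	}

Passing &dummy_pmdval rather than NULL documents that the pmdval recheck is
being skipped on purpose, which is exactly the point the added code comment
makes.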