The patch titled
     Subject: mm/hugetlb: make page_vma_mapped_walk() safe to pmd unshare
has been added to the -mm mm-unstable branch.  Its filename is
     mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/hugetlb: make page_vma_mapped_walk() safe to pmd unshare
Date: Tue, 29 Nov 2022 14:35:25 -0500

Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock to
make sure the pgtable page will not be freed concurrently.

Link: https://lkml.kernel.org/r/20221129193526.3588187-10-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: James Houghton <jthoughton@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/rmap.h |    4 ++++
 mm/page_vma_mapped.c |    5 ++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

--- a/include/linux/rmap.h~mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare
+++ a/include/linux/rmap.h
@@ -13,6 +13,7 @@
 #include <linux/highmem.h>
 #include <linux/pagemap.h>
 #include <linux/memremap.h>
+#include <linux/hugetlb.h>
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
+	/* This needs to be after unlock of the spinlock */
+	if (is_vm_hugetlb_page(pvmw->vma))
+		hugetlb_vma_unlock_read(pvmw->vma);
 }
 
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
--- a/mm/page_vma_mapped.c~mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare
+++ a/mm/page_vma_mapped.c
@@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vm
 		if (pvmw->pte)
 			return not_found(pvmw);
 
+		hugetlb_vma_lock_read(vma);
 		/* when pud is not present, pte will be NULL */
 		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
-		if (!pvmw->pte)
+		if (!pvmw->pte) {
+			hugetlb_vma_unlock_read(vma);
 			return false;
+		}
 
 		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
 		if (!check_pte(pvmw))
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-migrate-fix-read-only-page-got-writable-when-recover-pte.patch
mm-always-compile-in-pte-markers.patch
mm-use-pte-markers-for-swap-errors.patch
mm-uffd-sanity-check-write-bit-for-uffd-wp-protected-ptes.patch
selftests-vm-use-memfd-for-hugepage-mmap-test.patch
mm-thp-re-apply-mkdirty-for-small-pages-after-split.patch
mm-hugetlb-let-vma_offset_start-to-return-start.patch
mm-hugetlb-dont-wait-for-migration-entry-during-follow-page.patch
mm-hugetlb-document-huge_pte_offset-usage.patch
mm-hugetlb-move-swap-entry-handling-into-vma-lock-when-faulted.patch
mm-hugetlb-make-userfaultfd_huge_must_wait-safe-to-pmd-unshare.patch
mm-hugetlb-make-hugetlb_follow_page_mask-safe-to-pmd-unshare.patch
mm-hugetlb-make-follow_hugetlb_page-safe-to-pmd-unshare.patch
mm-hugetlb-make-walk_hugetlb_range-safe-to-pmd-unshare.patch
mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare.patch
mm-hugetlb-introduce-hugetlb_walk.patch
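
For illustration of the caller-side contract the diff above creates, a
minimal sketch follows.  It is not part of the patch: example_rmap_walk()
and the `stop` flag are hypothetical, and only the lock pairing reflects
the change.  With the patch, page_vma_mapped_walk() takes the hugetlb vma
read lock before huge_pte_offset(), drops it itself when no pte is found,
and otherwise leaves it held until page_vma_mapped_walk_done().

#include <linux/rmap.h>
#include <linux/hugetlb.h>

/* Hypothetical caller, in the style of existing rmap walkers. */
static void example_rmap_walk(struct folio *folio,
			      struct vm_area_struct *vma,
			      unsigned long address)
{
	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
	bool stop = false;		/* hypothetical early-exit condition */

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * For a hugetlb vma the walk took hugetlb_vma_lock_read()
		 * before huge_pte_offset(), so the pgtable page that
		 * pvmw.pte points into cannot be freed by a concurrent
		 * pmd unshare while it is inspected here under pvmw.ptl.
		 */
		if (stop) {
			/*
			 * Early exit: page_vma_mapped_walk_done() drops
			 * pvmw.ptl first and the hugetlb vma lock after,
			 * matching the ordering comment added in rmap.h.
			 */
			page_vma_mapped_walk_done(&pvmw);
			break;
		}
	}
	/* On normal termination the walk has already dropped both locks. */
}

Taking the vma lock inside the walk keeps the pmd-unshare exclusion in one
place rather than in every caller; the cost is that
page_vma_mapped_walk_done() must release the vma lock only after the page
table spinlock, as the comment added by the patch notes.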