The patch titled
     Subject: mm/hugetlb: make walk_hugetlb_range() safe to pmd unshare
has been added to the -mm mm-unstable branch.  Its filename is
     mm-hugetlb-make-walk_hugetlb_range-safe-to-pmd-unshare.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hugetlb-make-walk_hugetlb_range-safe-to-pmd-unshare.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/hugetlb: make walk_hugetlb_range() safe to pmd unshare
Date: Tue, 29 Nov 2022 14:35:24 -0500

Since walk_hugetlb_range() walks the pgtable, it needs the vma lock to
make sure the pgtable page will not be freed concurrently.

Link: https://lkml.kernel.org/r/20221129193526.3588187-9-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: James Houghton <jthoughton@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/pagewalk.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/mm/pagewalk.c~mm-hugetlb-make-walk_hugetlb_range-safe-to-pmd-unshare
+++ a/mm/pagewalk.c
@@ -302,6 +302,7 @@ static int walk_hugetlb_range(unsigned l
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	hugetlb_vma_lock_read(vma);
 	do {
 		next = hugetlb_entry_end(h, addr, end);
 		pte = huge_pte_offset(walk->mm, addr & hmask, sz);
@@ -314,6 +315,7 @@ static int walk_hugetlb_range(unsigned l
 		if (err)
 			break;
 	} while (addr = next, addr != end);
+	hugetlb_vma_unlock_read(vma);
 
 	return err;
 }
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-migrate-fix-read-only-page-got-writable-when-recover-pte.patch
mm-always-compile-in-pte-markers.patch
mm-use-pte-markers-for-swap-errors.patch
mm-uffd-sanity-check-write-bit-for-uffd-wp-protected-ptes.patch
selftests-vm-use-memfd-for-hugepage-mmap-test.patch
mm-thp-re-apply-mkdirty-for-small-pages-after-split.patch
mm-hugetlb-let-vma_offset_start-to-return-start.patch
mm-hugetlb-dont-wait-for-migration-entry-during-follow-page.patch
mm-hugetlb-document-huge_pte_offset-usage.patch
mm-hugetlb-move-swap-entry-handling-into-vma-lock-when-faulted.patch
mm-hugetlb-make-userfaultfd_huge_must_wait-safe-to-pmd-unshare.patch
mm-hugetlb-make-hugetlb_follow_page_mask-safe-to-pmd-unshare.patch
mm-hugetlb-make-follow_hugetlb_page-safe-to-pmd-unshare.patch
mm-hugetlb-make-walk_hugetlb_range-safe-to-pmd-unshare.patch
mm-hugetlb-make-page_vma_mapped_walk-safe-to-pmd-unshare.patch
mm-hugetlb-introduce-hugetlb_walk.patch
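
For readers following along without the tree handy, walk_hugetlb_range()
with the two hunks above applied reads roughly as below.  This is a sketch
reconstructed from the diff context: the hunk headers truncate the function
signature, so the parameter list and the local declarations above the first
hunk are taken from mm/pagewalk.c around this series and may differ in
detail from the exact tree the patch lands in.

static int walk_hugetlb_range(unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;
	struct hstate *h = hstate_vma(vma);
	unsigned long next;
	unsigned long hmask = huge_page_mask(h);
	unsigned long sz = huge_page_size(h);
	pte_t *pte;
	const struct mm_walk_ops *ops = walk->ops;
	int err = 0;

	/*
	 * Hold the vma lock for reading across the whole walk, so that a
	 * concurrent pmd unshare cannot free the pgtable page while the
	 * loop below dereferences the pte returned by huge_pte_offset().
	 */
	hugetlb_vma_lock_read(vma);
	do {
		next = hugetlb_entry_end(h, addr, end);
		pte = huge_pte_offset(walk->mm, addr & hmask, sz);

		if (pte)
			err = ops->hugetlb_entry(pte, hmask, addr, next, walk);
		else if (ops->pte_hole)
			err = ops->pte_hole(addr, next, -1, walk);

		if (err)
			break;
	} while (addr = next, addr != end);
	hugetlb_vma_unlock_read(vma);

	return err;
}

Taking the lock once around the loop, rather than per iteration, keeps the
fast path cheap; it is a read lock, so concurrent walkers of the same vma
are still allowed.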