On 11/29/22 14:35, Peter Xu wrote:
> Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
> 
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
>  include/linux/rmap.h | 4 ++++
>  mm/page_vma_mapped.c | 5 ++++-
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bd3504d11b15..a50d18bb86aa 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -13,6 +13,7 @@
>  #include <linux/highmem.h>
>  #include <linux/pagemap.h>
>  #include <linux/memremap.h>
> +#include <linux/hugetlb.h>
>  
>  /*
>   * The anon_vma heads a list of private "related" vmas, to scan if
> @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>  		pte_unmap(pvmw->pte);
>  	if (pvmw->ptl)
>  		spin_unlock(pvmw->ptl);
> +	/* This needs to be after unlock of the spinlock */
> +	if (is_vm_hugetlb_page(pvmw->vma))
> +		hugetlb_vma_unlock_read(pvmw->vma);
>  }
>  
>  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 93e13fc17d3c..f94ec78b54ff 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		if (pvmw->pte)
>  			return not_found(pvmw);
>  
> +		hugetlb_vma_lock_read(vma);
>  		/* when pud is not present, pte will be NULL */
>  		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> -		if (!pvmw->pte)
> +		if (!pvmw->pte) {
> +			hugetlb_vma_unlock_read(vma);
>  			return false;
> +		}
>  
>  		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
>  		if (!check_pte(pvmw))

I think this is going to cause try_to_unmap() to always fail for hugetlb
shared pages.  See try_to_unmap_one:

	while (page_vma_mapped_walk(&pvmw)) {
	...
		if (folio_test_hugetlb(folio)) {
	...
			/*
			 * To call huge_pmd_unshare, i_mmap_rwsem must be
			 * held in write mode.  Caller needs to explicitly
			 * do this outside rmap routines.
			 *
			 * We also must hold hugetlb vma_lock in write mode.
			 * Lock order dictates acquiring vma_lock BEFORE
			 * i_mmap_rwsem.  We can only try lock here and fail
			 * if unsuccessful.
			 */
			if (!anon) {
				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
				if (!hugetlb_vma_trylock_write(vma)) {
					page_vma_mapped_walk_done(&pvmw);
					ret = false;
				}

Since page_vma_mapped_walk() now returns with the vma_lock held for
read, the hugetlb_vma_trylock_write() above can never succeed for
shared mappings.

I can't think of a great solution right now.
-- 
Mike Kravetz
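
The trylock failure Mike points out follows from basic reader/writer
semantics: the walk now returns holding the vma_lock for read, and a
write trylock on the same lock by the same task must fail.  Below is a
minimal userspace sketch of that behavior, assuming the hugetlb
vma_lock acts like an rw_semaphore; the pthread rwlock is only a
stand-in for the kernel primitive, and the comments map the steps
loosely onto the kernel flow:

	/* sketch.c - build with: gcc sketch.c -o sketch -lpthread */
	#include <pthread.h>
	#include <stdio.h>

	/* Stand-in for the hugetlb vma_lock (an rw_semaphore in the kernel). */
	static pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;

	int main(void)
	{
		/*
		 * page_vma_mapped_walk() with the patch applied: returns
		 * to the caller with the lock still held for read.
		 */
		pthread_rwlock_rdlock(&vma_lock);

		/*
		 * try_to_unmap_one(): hugetlb_vma_trylock_write() on the
		 * same lock.  A reader cannot upgrade to writer, so this
		 * fails (EBUSY) while the walk holds the read lock.
		 */
		if (pthread_rwlock_trywrlock(&vma_lock) != 0)
			printf("trylock_write failed: walk still holds read lock\n");

		/*
		 * page_vma_mapped_walk_done(): drops the read lock, but
		 * only after the trylock above has already failed.
		 */
		pthread_rwlock_unlock(&vma_lock);
		return 0;
	}

This is also why try_to_unmap_one() can only trylock at that point:
the lock order (vma_lock before i_mmap_rwsem) rules out blocking
there, so with the read lock pinned by the walk the expected-rare
trylock failure becomes unconditional for shared hugetlb pages.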