On Tue, Dec 06, 2022 at 12:39:53PM -0500, Peter Xu wrote:
> On Tue, Dec 06, 2022 at 09:10:00AM -0800, Mike Kravetz wrote:
> > On 12/05/22 15:52, Mike Kravetz wrote:
> > > On 11/29/22 14:35, Peter Xu wrote:
> > > > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > > > to make sure the pgtable page will not be freed concurrently.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> > > > ---
> > > >  include/linux/rmap.h | 4 ++++
> > > >  mm/page_vma_mapped.c | 5 ++++-
> > > >  2 files changed, 8 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > > > index bd3504d11b15..a50d18bb86aa 100644
> > > > --- a/include/linux/rmap.h
> > > > +++ b/include/linux/rmap.h
> > > > @@ -13,6 +13,7 @@
> > > >  #include <linux/highmem.h>
> > > >  #include <linux/pagemap.h>
> > > >  #include <linux/memremap.h>
> > > > +#include <linux/hugetlb.h>
> > > >  
> > > >  /*
> > > >   * The anon_vma heads a list of private "related" vmas, to scan if
> > > > @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> > > >  		pte_unmap(pvmw->pte);
> > > >  	if (pvmw->ptl)
> > > >  		spin_unlock(pvmw->ptl);
> > > > +	/* This needs to be after unlock of the spinlock */
> > > > +	if (is_vm_hugetlb_page(pvmw->vma))
> > > > +		hugetlb_vma_unlock_read(pvmw->vma);
> > > >  }
> > > >  
> > > >  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> > > > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > > > index 93e13fc17d3c..f94ec78b54ff 100644
> > > > --- a/mm/page_vma_mapped.c
> > > > +++ b/mm/page_vma_mapped.c
> > > > @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> > > >  		if (pvmw->pte)
> > > >  			return not_found(pvmw);
> > > >  
> > > > +		hugetlb_vma_lock_read(vma);
> > > >  		/* when pud is not present, pte will be NULL */
> > > >  		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> > > > -		if (!pvmw->pte)
> > > > +		if (!pvmw->pte) {
> > > > +			hugetlb_vma_unlock_read(vma);
> > > >  			return false;
> > > > +		}
> > > >  
> > > >  		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
> > > >  		if (!check_pte(pvmw))
> > > 
> > > I think this is going to cause try_to_unmap() to always fail for hugetlb
> > > shared pages.  See try_to_unmap_one:
> > > 
> > > 	while (page_vma_mapped_walk(&pvmw)) {
> > > 		...
> > > 		if (folio_test_hugetlb(folio)) {
> > > 			...
> > > 			/*
> > > 			 * To call huge_pmd_unshare, i_mmap_rwsem must be
> > > 			 * held in write mode.  Caller needs to explicitly
> > > 			 * do this outside rmap routines.
> > > 			 *
> > > 			 * We also must hold hugetlb vma_lock in write mode.
> > > 			 * Lock order dictates acquiring vma_lock BEFORE
> > > 			 * i_mmap_rwsem.  We can only try lock here and fail
> > > 			 * if unsuccessful.
> > > 			 */
> > > 			if (!anon) {
> > > 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > > 				if (!hugetlb_vma_trylock_write(vma)) {
> > > 					page_vma_mapped_walk_done(&pvmw);
> > > 					ret = false;
> > > 				}
> > > 
> > > Can not think of a great solution right now.
> > 
> > Thought of this last night ...
> > 
> > Perhaps we do not need vma_lock in this code path (not sure about all
> > page_vma_mapped_walk calls).  Why?  We already hold i_mmap_rwsem.
> 
> Exactly.  The only concern is when it's not in a rmap.
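
For illustration, the failure Mike describes can be reproduced with a
minimal userspace analogue.  This is a hypothetical sketch using a POSIX
rwlock in place of the kernel rwsem backing the hugetlb vma lock; the
point is only that a write trylock can never succeed while the same task
already holds the read side, so the shared-hugetlb unmap path would bail
out every time:

	#include <pthread.h>
	#include <stdio.h>

	int main(void)
	{
		pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;

		/* page_vma_mapped_walk(): hugetlb_vma_lock_read(vma) */
		pthread_rwlock_rdlock(&vma_lock);

		/*
		 * try_to_unmap_one(): hugetlb_vma_trylock_write(vma).
		 * Returns EBUSY here, every time, because the read side
		 * is still held by this very task.
		 */
		if (pthread_rwlock_trywrlock(&vma_lock) != 0)
			printf("trylock_write failed: read side is held\n");

		pthread_rwlock_unlock(&vma_lock);
		return 0;
	}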
> I'm actually preparing something that adds a new flag to PVMW, like:
> 
> 	#define PVMW_HUGETLB_NEEDS_LOCK	(1 << 2)
> 
> But maybe we don't need that at all, since on a closer look the only
> outliers not going through rmap are:
> 
> 	__replace_page
> 	write_protect_page
> 
> I'm pretty sure ksm doesn't have hugetlb involved, and the other one is
> uprobe (uprobe_write_opcode), which I think is the same.  If that's true,
> we can simply drop this patch.  Then we also have hugetlb_walk(), and the
> lock checks there guarantee that we're safe anyway.
> 
> Potentially we can document this fact; I attached a comment patch for
> exactly that, to be appended to the end of the patchset.
> 
> Mike, let me know what you think.
> 
> Andrew, if this patch is to be dropped then the last patch may not apply
> cleanly.  Let me know if you want a full repost of the series.

The document patch that can be appended to the end of this series is
attached.  I referenced hugetlb_walk() so it needs to be the last patch.

-- 
Peter Xu
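
As context for the "lock checks" mentioned above: the hugetlb_walk()
helper referenced by this series asserts, when lockdep is enabled, that
anyone walking a shareable hugetlb VMA holds either the hugetlb vma lock
or i_mmap_rwsem.  A rough sketch, with field and helper names taken from
the series (the exact in-tree code may differ):

	static inline pte_t *hugetlb_walk(struct vm_area_struct *vma,
					  unsigned long addr, unsigned long sz)
	{
		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
		struct address_space *mapping = vma->vm_file->f_mapping;

		/*
		 * Walking hugetlb pgtables races with pmd unsharing unless
		 * the walker holds one of these two locks; warn once if a
		 * new caller ever breaks the rule.
		 */
		if (__vma_shareable_lock(vma))
			WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
				     !lockdep_is_held(&mapping->i_mmap_rwsem));

		return huge_pte_offset(vma->vm_mm, addr, sz);
	}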
From 754c2180804e9e86accf131573cbd956b8c62829 Mon Sep 17 00:00:00 2001
From: Peter Xu <peterx@xxxxxxxxxx>
Date: Tue, 6 Dec 2022 12:36:04 -0500
Subject: [PATCH] mm/hugetlb: Document why page_vma_mapped_walk() is safe to
 walk
Content-type: text/plain

Taking the vma lock here is not needed for now because all potential
hugetlb walkers here should have i_mmap_rwsem held.  Document the fact.

Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
 mm/page_vma_mapped.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e97b2e23bd28..2e59a0419d22 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
-
-		/* when pud is not present, pte will be NULL */
+		/*
+		 * NOTE: we don't need explicit lock here to walk the
+		 * hugetlb pgtable because either (1) potential callers of
+		 * hugetlb pvmw currently holds i_mmap_rwsem, or (2) the
+		 * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
+		 * When one day this rule breaks, one will get a warning
+		 * in hugetlb_walk(), and then we'll figure out what to do.
+		 */
 		pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
 		if (!pvmw->pte)
 			return false;
-- 
2.37.3
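
For completeness, rule (1) in the comment above holds because the file
rmap walk takes i_mmap_rwsem around the per-VMA iteration unless the
caller already holds it (TTU_RMAP_LOCKED).  A simplified sketch of
rmap_walk_file() from mm/rmap.c, trimmed down and possibly differing in
detail from any given kernel version:

	static void rmap_walk_file(struct folio *folio,
				   struct rmap_walk_control *rwc, bool locked)
	{
		struct address_space *mapping = folio_mapping(folio);
		pgoff_t pgoff_start = folio_pgoff(folio);
		pgoff_t pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
		struct vm_area_struct *vma;

		/* TTU_RMAP_LOCKED callers arrive here with locked == true */
		if (!locked)
			i_mmap_lock_read(mapping);

		vma_interval_tree_foreach(vma, &mapping->i_mmap,
					  pgoff_start, pgoff_end) {
			/*
			 * rwc->rmap_one() ends up in page_vma_mapped_walk(),
			 * i.e. hugetlb VMAs are walked under i_mmap_rwsem.
			 */
			if (!rwc->rmap_one(folio, vma,
					   vma_address(&folio->page, vma),
					   rwc->arg))
				break;
		}

		if (!locked)
			i_mmap_unlock_read(mapping);
	}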