On Fri, Feb 17, 2023 at 4:29 PM James Houghton <jthoughton@xxxxxxxxxx> wrote:
>
> Because it is safe to do so, do a full high-granularity page table walk
> to check if the page is mapped.
>
> Signed-off-by: James Houghton <jthoughton@xxxxxxxxxx>
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index cfd09f95551b..c0ee69f0418e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -386,17 +386,24 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
>  static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
>  				  unsigned long addr, struct page *page)
>  {
> -	pte_t *ptep, pte;
> +	pte_t pte;
> +	struct hugetlb_pte hpte;
>
> -	ptep = hugetlb_walk(vma, addr, huge_page_size(hstate_vma(vma)));
> -	if (!ptep)
> +	if (hugetlb_full_walk(&hpte, vma, addr))
>  		return false;
>
> -	pte = huge_ptep_get(ptep);
> +	pte = huge_ptep_get(hpte.ptep);
>  	if (huge_pte_none(pte) || !pte_present(pte))
>  		return false;
>
> -	if (pte_page(pte) == page)
> +	if (unlikely(!hugetlb_pte_present_leaf(&hpte, pte)))
> +		/*
> +		 * We raced with someone splitting us, and the only case
> +		 * where this is impossible is when the pte was none.
> +		 */
> +		return false;
> +
> +	if (compound_head(pte_page(pte)) == page)
>  		return true;
>
>  	return false;
> --
> 2.39.2.637.g21b0678d19-goog
>

I think this patch is actually incorrect. This function is *supposed*
to check whether the page is mapped at all in this VMA, but here we
only check whether the base address of the page is mapped.

If we did the 'hugetlb_vma_maybe_maps_page' approach that I did
previously and returned 'true' if !hugetlb_pte_present_leaf(), then
this code would be correct again.

But what I really think this function should do is just call
page_vma_mapped_walk(); we're sort of reimplementing it here anyway.
Unless someone disagrees, I'll do this for v3.