On 2022/7/7 4:23, Mike Kravetz wrote:
> Create the new routine hugetlb_unmap_file_folio that will unmap a single
> file folio. This is refactored code from hugetlb_vmdelete_list. It is
> modified to do locking within the routine itself and check whether the
> page is mapped within a specific vma before unmapping.
>
> This refactoring will be put to use and expanded upon in a subsequent
> patch adding vma specific locking.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> ---
>  fs/hugetlbfs/inode.c | 124 +++++++++++++++++++++++++++++++++----------
>  1 file changed, 95 insertions(+), 29 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 31bd4325fce5..0eac0ea2a245 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -396,6 +396,94 @@ static int hugetlbfs_write_end(struct file *file, struct address_space *mapping,
>  	return -EINVAL;
>  }
>
> +/*
> + * Called with i_mmap_rwsem held for inode based vma maps. This makes
> + * sure vma (and vm_mm) will not go away. We also hold the hugetlb fault
> + * mutex for the page in the mapping. So, we can not race with page being
> + * faulted into the vma.
> + */
> +static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
> +				unsigned long addr, struct page *page)
> +{
> +	pte_t *ptep, pte;
> +
> +	ptep = huge_pte_offset(vma->vm_mm, addr,
> +			huge_page_size(hstate_vma(vma)));
> +
> +	if (!ptep)
> +		return false;
> +
> +	pte = huge_ptep_get(ptep);
> +	if (huge_pte_none(pte) || !pte_present(pte))
> +		return false;
> +
> +	if (pte_page(pte) == page)
> +		return true;
> +
> +	return false;	/* WTH??? */

I'm sorry, but what does "WTH" mean here? IIUC, this case can happen when pte_page(pte) is a COW-ed private page, since vma_interval_tree_foreach() doesn't exclude private mappings even after COW?

Except for the above (trivial) question, this patch looks good to me.

Thanks.
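
For reference, the COW scenario I have in mind is roughly the userspace sketch below. It is only an illustration: the hugetlbfs mount point, the file name, the 2MB huge page size, and the need for at least two free huge pages are all assumptions on my side.

/*
 * Hypothetical sketch of the COW case mentioned above.  Mount point,
 * file name and huge page size are assumptions; at least two free
 * huge pages are required.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed 2MB huge pages */

int main(void)
{
	int fd = open("/dev/hugepages/demo", O_CREAT | O_RDWR, 0600);

	if (fd < 0 || ftruncate(fd, HPAGE_SIZE)) {
		perror("setup");
		return 1;
	}

	/* Shared mapping: faulting it in instantiates the page cache page. */
	char *shared = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	/* Private mapping of the same offset in the same file. */
	char *priv = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE, fd, 0);

	if (shared == MAP_FAILED || priv == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	shared[0] = 'S';	/* populate the file's page cache page */
	priv[0] = 'P';		/* write fault -> COW: priv now maps a private copy */

	/* The two mappings have diverged: prints "shared[0]=S priv[0]=P". */
	printf("shared[0]=%c priv[0]=%c\n", shared[0], priv[0]);

	munmap(shared, HPAGE_SIZE);
	munmap(priv, HPAGE_SIZE);
	close(fd);
	unlink("/dev/hugepages/demo");
	return 0;
}

After the write to the private mapping, that vma is still found by vma_interval_tree_foreach() on the file's i_mmap tree, but its PTE now points at the private copy rather than the page cache page, so hugetlb_vma_maps_page() would return false for it. If I read it right, that is exactly the case the final "return false" is meant to handle.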