On 2022/9/15 6:18, Mike Kravetz wrote:
> Create the new routine remove_inode_single_folio that will remove a
> single folio from a file.  This is refactored code from
> remove_inode_hugepages.  It checks for the uncommon case in which the
> folio is still mapped and, if so, unmaps it.
>
> No functional change.  This refactoring will be put to use and expanded
> upon in subsequent patches.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

LGTM with one nit below.

Reviewed-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>

> ---
>  fs/hugetlbfs/inode.c | 105 ++++++++++++++++++++++++++-----------------
>  1 file changed, 63 insertions(+), 42 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index edd69cc43ca5..7112a9a9f54d 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -411,6 +411,60 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  	}
>  }
>
> +/*
> + * Called with hugetlb fault mutex held.
> + * Returns true if page was actually removed, false otherwise.
> + */
> +static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
> +					struct address_space *mapping,
> +					struct folio *folio, pgoff_t index,
> +					bool truncate_op)
> +{
> +	bool ret = false;
> +
> +	/*
> +	 * If folio is mapped, it was faulted in after being
> +	 * unmapped in caller.  Unmap (again) while holding
> +	 * the fault mutex.  The mutex will prevent faults
> +	 * until we finish removing the folio.
> +	 */
> +	if (unlikely(folio_mapped(folio))) {
> +		i_mmap_lock_write(mapping);
> +		hugetlb_vmdelete_list(&mapping->i_mmap,
> +			index * pages_per_huge_page(h),
> +			(index + 1) * pages_per_huge_page(h),
> +			ZAP_FLAG_DROP_MARKER);
> +		i_mmap_unlock_write(mapping);
> +	}
> +
> +	folio_lock(folio);
> +	/*
> +	 * After locking page, make sure mapping is the same.
> +	 * We could have raced with page fault populate and
> +	 * backout code.

Is this needed? remove_inode_single_folio() is called with the hugetlb
fault mutex held, so it can't race with the page fault code?

Thanks,
Miaohe Lin