On 07/10/23 16:04, Sidhartha Kumar wrote:
> Remove special cased hugetlb handling code within the page cache by
> changing the granularity of each index to the base page size rather than
> the huge page size. Adds new wrappers for hugetlb code to interact with the
> page cache which convert to a linear index.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
> ---
>  fs/hugetlbfs/inode.c    | 10 +++++-----
>  include/linux/hugetlb.h | 12 ++++++++++++
>  include/linux/pagemap.h | 26 ++------------------------
>  mm/filemap.c            | 36 +++++++++++-------------------------
>  mm/hugetlb.c            | 11 ++++++-----
>  5 files changed, 36 insertions(+), 59 deletions(-)

I still want to see a better explanation of the performance impact of this
change, as previously stated. However, I did take a closer look at the
actual code changes and could not find any issues. One suggestion noted
below.

> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index c2b807d37f852..d78c71dacf0d4 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -663,20 +663,20 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  	struct hstate *h = hstate_inode(inode);
>  	struct address_space *mapping = &inode->i_data;
>  	const pgoff_t start = lstart >> huge_page_shift(h);
> -	const pgoff_t end = lend >> huge_page_shift(h);
> +	const pgoff_t end = lend >> PAGE_SHIFT;

The code is correct, but when looking at this it 'appears' wrong to have
start and end in different size units.
start is only used in the statement:

	if (truncate_op)
		(void)hugetlb_unreserve_pages(inode, start, LONG_MAX, freed);

So, to avoid confusion, it might be better to get rid of start and code the
above as:

	if (truncate_op)
		(void)hugetlb_unreserve_pages(inode,
				lstart >> huge_page_shift(h),
				LONG_MAX, freed);

>  	struct folio_batch fbatch;
>  	pgoff_t next, index;
>  	int i, freed = 0;
>  	bool truncate_op = (lend == LLONG_MAX);
> 
>  	folio_batch_init(&fbatch);
> -	next = start;
> +	next = lstart >> PAGE_SHIFT;
>  	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
>  		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
>  			struct folio *folio = fbatch.folios[i];
>  			u32 hash = 0;
> 
> -			index = folio->index;
> +			index = folio->index >> huge_page_order(h);
>  			hash = hugetlb_fault_mutex_hash(mapping, index);
>  			mutex_lock(&hugetlb_fault_mutex_table[hash]);
-- 
Mike Kravetz