Originally the callers of find_get_entries() and find_lock_entries() kept
track of the start index themselves as they traversed the search range.
This resulted in hacky code such as in shmem_undo_range():

	index = folio->index + folio_nr_pages(folio) - 1;

where the "- 1" is only present to stay in the right spot after index is
incremented later. This calculation was also being done on every folio
even when index was not used later within that function. The first two
patches change find_get_entries() and find_lock_entries() to calculate
the new index instead of leaving it to the callers, so we can avoid all
these complications.

Furthermore, the indices array is used almost exclusively for the index
calculations mentioned above. Now that those calculations no longer
occur, the indices array serves no purpose aside from tracking the
xarray index of a folio, which is also no longer needed: each folio
already keeps track of its index, which can be accessed via
folio->index instead. The last two patches remove the indices arrays
from the calling functions: truncate_inode_pages_range(),
invalidate_inode_pages2_range(), invalidate_mapping_pagevec(), and
shmem_undo_range().

Vishal Moola (Oracle) (4):
  filemap: find_lock_entries() now updates start offset
  filemap: find_get_entries() now updates start offset
  truncate: Remove indices argument from truncate_folio_batch_exceptionals()
  filemap: Remove indices argument from find_lock_entries() and
    find_get_entries()

 mm/filemap.c  | 40 ++++++++++++++++++++++++++++-----------
 mm/internal.h |  8 ++++----
 mm/shmem.c    | 23 +++++++----------------
 mm/truncate.c | 52 +++++++++++++++++++--------------------------
 4 files changed, 59 insertions(+), 64 deletions(-)

-- 
2.36.1