The patch titled
     Subject: readahead: drop pointless index from force_page_cache_ra()
has been added to the -mm mm-unstable branch.  Its filename is
     readahead-drop-pointless-index-from-force_page_cache_ra.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/readahead-drop-pointless-index-from-force_page_cache_ra.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: readahead: drop pointless index from force_page_cache_ra()
Date: Tue, 25 Jun 2024 12:18:54 +0200

The current readahead index is tracked in readahead_control and properly
updated by page_cache_ra_unbounded() (by read_pages(), in fact), so
there's no need to track the index separately in force_page_cache_ra().

Link: https://lkml.kernel.org/r/20240625101909.12234-4-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Reviewed-by: Josef Bacik <josef@xxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/readahead.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/mm/readahead.c~readahead-drop-pointless-index-from-force_page_cache_ra
+++ a/mm/readahead.c
@@ -313,7 +313,7 @@ void force_page_cache_ra(struct readahea
 	struct address_space *mapping = ractl->mapping;
 	struct file_ra_state *ra = ractl->ra;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	unsigned long max_pages, index;
+	unsigned long max_pages;
 
 	if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
 		return;
@@ -322,7 +322,6 @@ void force_page_cache_ra(struct readahea
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
 	 */
-	index = readahead_index(ractl);
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
 	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
@@ -330,10 +329,8 @@ void force_page_cache_ra(struct readahea
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
 
-		ractl->_index = index;
 		do_page_cache_ra(ractl, this_chunk, 0);
 
-		index += this_chunk;
 		nr_to_read -= this_chunk;
 	}
 }
_

Patches currently in -mm which might be from jack@xxxxxxx are

revert-mm-writeback-fix-possible-divide-by-zero-in-wb_dirty_limits-again.patch
mm-avoid-overflows-in-dirty-throttling-logic.patch
readahead-make-sure-sync-readahead-reads-needed-page.patch
filemap-fix-page_cache_next_miss-when-no-hole-found.patch
readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch
readahead-drop-pointless-index-from-force_page_cache_ra.patch
readahead-drop-index-argument-of-page_cache_async_readahead.patch
readahead-drop-dead-code-in-page_cache_ra_order.patch
readahead-drop-dead-code-in-ondemand_readahead.patch
readahead-disentangle-async-and-sync-readahead.patch
readahead-fold-try_context_readahead-into-its-single-caller.patch
readahead-simplify-gotos-in-page_cache_sync_ra.patch
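
For reference, below is a sketch of how force_page_cache_ra() reads with
this patch applied, assembled from the hunks above.  The full function
signature and the 2MB this_chunk initialisation are not visible in the
diff context, so treat those two details as assumptions based on the
surrounding mm/readahead.c code rather than as part of the patch itself.

/*
 * Sketch of force_page_cache_ra() after the patch, reconstructed from
 * the hunks above.  ractl->_index is no longer touched here: it is
 * advanced by read_pages(), reached via do_page_cache_ra() ->
 * page_cache_ra_unbounded(), as the changelog explains.
 */
void force_page_cache_ra(struct readahead_control *ractl,
		unsigned long nr_to_read)
{
	struct address_space *mapping = ractl->mapping;
	struct file_ra_state *ra = ractl->ra;
	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
	unsigned long max_pages;

	if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
		return;

	/*
	 * If the request exceeds the readahead window, allow the read to
	 * be up to the optimal hardware IO size
	 */
	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
	while (nr_to_read) {
		/* assumed 2MB chunking; this line is outside the hunks */
		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;

		if (this_chunk > nr_to_read)
			this_chunk = nr_to_read;

		/* advances ractl->_index internally via read_pages() */
		do_page_cache_ra(ractl, this_chunk, 0);

		nr_to_read -= this_chunk;
	}
}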