The patch titled
     Subject: readahead: properly shorten readahead when falling back to do_page_cache_ra()
has been added to the -mm mm-unstable branch.  Its filename is
     readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: readahead: properly shorten readahead when falling back to do_page_cache_ra()
Date: Wed, 4 Dec 2024 19:10:16 +0100

When we succeed in creating some folios in page_cache_ra_order() but then
need to fall back to single page folios, we don't shorten the amount to
read passed to do_page_cache_ra() by the amount we've already read.  This
then results in reading more and also in placing another readahead mark in
the middle of the readahead window, which confuses the readahead code.
Fix the problem by properly reducing the number of pages to read.  Unlike
the previous attempt at this fix (commit 7c877586da31), which had to be
reverted, we are now careful to check there is indeed something to read so
that we don't submit negative-sized readahead.

Link: https://lkml.kernel.org/r/20241204181016.15273-3-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/readahead.c |   13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

--- a/mm/readahead.c~readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra
+++ a/mm/readahead.c
@@ -450,7 +450,8 @@ void page_cache_ra_order(struct readahea
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t index = readahead_index(ractl);
+	pgoff_t start = readahead_index(ractl);
+	pgoff_t index = start;
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -508,12 +509,18 @@ void page_cache_ra_order(struct readahea
 	/*
 	 * If there were already pages in the page cache, then we may have
 	 * left some gaps.  Let the regular readahead code take care of this
-	 * situation.
+	 * situation below.
 	 */
 	if (!err)
 		return;
fallback:
-	do_page_cache_ra(ractl, ra->size, ra->async_size);
+	/*
+	 * ->readahead() may have updated readahead window size so we have to
+	 * check there's still something to read.
+	 */
+	if (ra->size > index - start)
+		do_page_cache_ra(ractl, ra->size - (index - start),
+				 ra->async_size);
 }
 
 static unsigned long ractl_max_pages(struct readahead_control *ractl,
_
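For readers who want to see the fallback arithmetic in isolation, below is a
minimal userspace sketch (illustration only, not part of the patch; the
function name fallback_nr_pages() and the parameters ra_size, start and index
are made up here and merely stand in for ra->size, the saved starting index
and the index reached before falling back):

/*
 * Illustration only, not kernel code: standalone model of the length
 * calculation the new fallback path performs.
 */
#include <assert.h>
#include <stdio.h>

/* Pages left for the single-page fallback, or 0 if nothing remains. */
static unsigned long fallback_nr_pages(unsigned long ra_size,
				       unsigned long start,
				       unsigned long index)
{
	unsigned long already_read = index - start;

	/*
	 * The window may have shrunk while large folios were being read,
	 * so only return the remainder when it is still positive; an
	 * unchecked unsigned subtraction would wrap to a huge length.
	 */
	if (ra_size > already_read)
		return ra_size - already_read;
	return 0;
}

int main(void)
{
	/* 32-page window, 8 pages already read as large folios: 24 left. */
	assert(fallback_nr_pages(32, 100, 108) == 24);
	/* Window shrunk to 4 pages but 8 already read: nothing left. */
	assert(fallback_nr_pages(4, 100, 108) == 0);
	printf("fallback arithmetic OK\n");
	return 0;
}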
Patches currently in -mm which might be from jack@xxxxxxx are

revert-readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch
readahead-dont-shorted-readahead-window-in-read_pages.patch
readahead-properly-shorten-readahead-when-falling-back-to-do_page_cache_ra.patch