The commit 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by
one") updated page_cache_next_miss() to return the index beyond the range.
But this breaks the start/size of ra in ondemand_readahead() because the
offset by one is accumulated into readahead_index. As a consequence, the
best readahead order is no longer picked.

Tracing the order parameter of filemap_alloc_folio() showed:

With 9425c591e06a9:
page order : count     distribution
    0      : 892073    |                                        |
    1      : 0         |                                        |
    2      : 65120457  |****************************************|
    3      : 32914005  |********************                    |
    4      : 33020991  |********************                    |

With parent commit:
page order : count     distribution
    0      : 3417288   |****                                    |
    1      : 0         |                                        |
    2      : 877012    |*                                       |
    3      : 288       |                                        |
    4      : 5607522   |*******                                 |
    5      : 29974228  |****************************************|

Fix the issue by setting the correct start/size of ra in
ondemand_readahead().

Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@xxxxxxxxx
Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
Signed-off-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
---
 mm/readahead.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 47afbca1d122e..a1b8c628851a9 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -614,11 +614,11 @@ static void ondemand_readahead(struct readahead_control *ractl,
 				max_pages);
 		rcu_read_unlock();
 
-		if (!start || start - index > max_pages)
+		if (!start || start - index - 1 > max_pages)
 			return;
 
-		ra->start = start;
-		ra->size = start - index;	/* old async_size */
+		ra->start = start - 1;
+		ra->size = start - index - 1;	/* old async_size */
 		ra->size += req_size;
 		ra->size = get_next_ra_size(ra, max_pages);
 		ra->async_size = ra->size;
-- 
2.39.2
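
For anyone checking the window arithmetic by hand, below is a minimal
userspace sketch of why the "- 1" compensation in the hunk restores the old
start/size. This is not kernel code: it does not call the real
page_cache_next_miss(), omits get_next_ra_size(), and the values of index,
hole and req_size are made up purely for illustration.

/*
 * Illustrative userspace sketch only (not kernel code). It compares the
 * readahead window computed from the old helper semantics (start == first
 * missing index) with the window computed from a return value that is one
 * index beyond, after applying the "- 1" compensation from the patch.
 * All names and numbers are hypothetical.
 */
#include <stdio.h>

int main(void)
{
	unsigned long index = 100;	/* folio that just hit in the cache */
	unsigned long hole = 116;	/* assumed first missing index */
	unsigned long req_size = 16;	/* assumed request size in pages */

	/* Old helper semantics: returned value is the hole itself. */
	unsigned long old_start = hole;
	unsigned long old_size = old_start - index + req_size;

	/* New helper semantics: returned value is one beyond the hole,
	 * so the caller subtracts one, as the patch does. */
	unsigned long new_ret = hole + 1;
	unsigned long fixed_start = new_ret - 1;
	unsigned long fixed_size = new_ret - index - 1 + req_size;

	printf("old:   start=%lu size=%lu\n", old_start, old_size);
	printf("fixed: start=%lu size=%lu\n", fixed_start, fixed_size);
	return 0;
}

Both lines print the same start/size, which is the effect the two "- 1"
adjustments in ondemand_readahead() are meant to achieve before the size is
fed into get_next_ra_size().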