Hi Mike,

> On 06/21/23 15:19, kernel test robot wrote:

<snip>

> I suspected this change could impact page_cache_next/prev_miss users, but had
> no idea how much.
>
> Unless someone sees something wrong in 9425c591e06a, the best approach
> might be to revert and then add a simple interface to check for 'folio at
> a given index in the cache' as suggested by Ackerley Tng.
> https://lore.kernel.org/linux-mm/98624c2f481966492b4eb8272aef747790229b73.1683069252.git.ackerleytng@xxxxxxxxxx/

Some findings on my side:

1. Your patch impacts the folio order used for file readahead. I collected
a histogram of the order parameter passed to filemap_alloc_folio(), with
and without your patch (a sketch of one way to collect such a histogram is
at the end of this mail).

With your patch:

     page order      : count       distribution
        0            : 892073      |                                        |
        1            : 0           |                                        |
        2            : 65120457    |****************************************|
        3            : 32914005    |********************                    |
        4            : 33020991    |********************                    |

Without your patch:

     page order      : count       distribution
        0            : 3417288     |****                                    |
        1            : 0           |                                        |
        2            : 877012      |*                                       |
        3            : 288         |                                        |
        4            : 5607522     |*******                                 |
        5            : 29974228    |****************************************|

We can see that order 5 dominates the filemap folios without your patch.
With your patch, orders 2, 3 and 4 are used most for filemap folios.

2. My understanding is that your patch is correct and shouldn't be
reverted. I made a small change on top of your patch and the performance
regression is gone:

diff --git a/mm/readahead.c b/mm/readahead.c
index 47afbca1d122..cca333f9b560 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -610,7 +610,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 		pgoff_t start;
 
 		rcu_read_lock();
-		start = page_cache_next_miss(ractl->mapping, index + 1,
+		start = page_cache_next_miss(ractl->mapping, index,
 				max_pages);
 		rcu_read_unlock();

And the filemap folio order distribution is restored as well:

     page order      : count       distribution
        0            : 3357622     |****                                    |
        1            : 0           |                                        |
        2            : 861726      |*                                       |
        3            : 285         |                                        |
        4            : 4511637     |*****                                   |
        5            : 30505713    |****************************************|

I still haven't figured out why this simple change restores the
performance, or why index + 1 was used in the first place. Will check
more.


Regards
Yin, Fengwei
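
P.S. The order histograms above can be collected with a small bcc script
along the lines of the sketch below. This is only an illustration, not
necessarily the exact tooling used for the numbers in this mail; it assumes
filemap_alloc_folio() is not inlined and keeps the
filemap_alloc_folio(gfp, order) signature, so the order is the second
argument seen by the kprobe.

#!/usr/bin/env python3
# Sketch: linear histogram of the 'order' argument of filemap_alloc_folio().
# Assumption: filemap_alloc_folio(gfp_t gfp, unsigned int order) is a real
# (non-inlined) symbol, so a kprobe can read 'order' as the second parameter.
from time import sleep

from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HISTOGRAM(dist, unsigned int);

int trace_filemap_alloc_folio(struct pt_regs *ctx)
{
    /* order is the second argument of filemap_alloc_folio(gfp, order) */
    unsigned int order = PT_REGS_PARM2(ctx);

    dist.increment(order);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="filemap_alloc_folio",
                fn_name="trace_filemap_alloc_folio")

print("Tracing filemap_alloc_folio()... Hit Ctrl-C to end.")
try:
    sleep(99999999)
except KeyboardInterrupt:
    pass

# Prints one bucket per order, in the "page order : count distribution" format.
b["dist"].print_linear_hist("page order")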