On Tue, Jun 18, 2024 at 08:56:53AM +0200, Hannes Reinecke wrote:
> On 6/17/24 18:39, Pankaj Raghav (Samsung) wrote:
> > On Mon, Jun 17, 2024 at 05:10:15PM +0100, Matthew Wilcox wrote:
> > > On Mon, Jun 17, 2024 at 04:04:20PM +0000, Pankaj Raghav (Samsung) wrote:
> > > > On Mon, Jun 17, 2024 at 01:32:42PM +0100, Matthew Wilcox wrote:
> > > > So the following can still be there from Hannes patch as we have a
> > > > stable reference:
> > > >
> > > >  	ractl->_workingset |= folio_test_workingset(folio);
> > > > -	ractl->_nr_pages++;
> > > > +	ractl->_nr_pages += folio_nr_pages(folio);
> > > > +	i += folio_nr_pages(folio);
> > > >  }
> > >
> > > We _can_, but we just allocated it, so we know what size it is already.
> >
> > Yes.
> >
> > > I'm starting to feel that Hannes' patch should be combined with this
> > > one.
> >
> > Fine by me. @Hannes, is that ok with you?
>
> Sure. I was about to re-send my patchset anyway, so feel free to wrap it in.

Is it ok if I add your Co-developed-by and Signed-off-by tags?

This is what I have, combining your patch with mine and making willy's
changes:

diff --git a/mm/readahead.c b/mm/readahead.c
index 389cd802da63..f56da953c130 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -247,9 +247,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		struct folio *folio = xa_load(&mapping->i_pages, index + i);
 		int ret;
 
 		if (folio && !xa_is_value(folio)) {
-			long nr_pages = folio_nr_pages(folio);
-
 			/*
 			 * Page already present? Kick off the current batch
 			 * of contiguous pages before continuing with the
@@ -259,18 +257,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl);
-
-			/*
-			 * Move the ractl->_index by at least min_pages
-			 * if the folio got truncated to respect the
-			 * alignment constraint in the page cache.
-			 *
-			 */
-			if (mapping != folio->mapping)
-				nr_pages = min_nrpages;
-
-			VM_BUG_ON_FOLIO(nr_pages < min_nrpages, folio);
-			ractl->_index += nr_pages;
+			ractl->_index += min_nrpages;
 			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
@@ -293,8 +280,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		if (i == mark)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
-		ractl->_nr_pages += folio_nr_pages(folio);
-		i += folio_nr_pages(folio);
+		ractl->_nr_pages += min_nrpages;
+		i += min_nrpages;
 	}
 
 	/*