On Mon 20-06-22 19:05:36, Alistair Popple wrote:
> Commit 793917d997df ("mm/readahead: Add large folio readahead")
> introduced support for using large folios for filebacked pages if the
> filesystem supports it.
>
> page_cache_ra_order() was introduced to allocate and add these large
> folios to the page cache. However adding pages to the page cache should
> be serialized against truncation and hole punching by taking
> invalidate_lock. Not doing so can lead to data races resulting in stale
> data getting added to the page cache and marked up-to-date. See commit
> 730633f0b7f9 ("mm: Protect operations adding pages to page cache with
> invalidate_lock") for more details.
>
> This issue was found by inspection but a testcase revealed it was
> possible to observe in practice on XFS. Fix this by taking
> invalidate_lock in page_cache_ra_order(), to mirror what is done for the
> non-thp case in page_cache_ra_unbounded().
>
> Signed-off-by: Alistair Popple <apopple@xxxxxxxxxx>
> Fixes: 793917d997df ("mm/readahead: Add large folio readahead")

Thanks for catching this! Your fix looks good to me so feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

								Honza

> ---
>  mm/readahead.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 4a60cdb64262..38635af5bab7 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -508,6 +508,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
>  		new_order--;
>  	}
>
> +	filemap_invalidate_lock_shared(mapping);
>  	while (index <= limit) {
>  		unsigned int order = new_order;
>
> @@ -534,6 +535,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
>  	}
>
>  	read_pages(ractl);
> +	filemap_invalidate_unlock_shared(mapping);
>
>  	/*
>  	 * If there were already pages in the page cache, then we may have
> --
> 2.35.1
>

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
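
For reference, a minimal sketch of the locking discipline the patch adopts, modelled on the pattern in page_cache_ra_unbounded() that the fix says it mirrors. The lock/unlock calls and read_pages() come from the diff above; the code in between is illustrative only, not the actual kernel function:

	struct address_space *mapping = ractl->mapping;

	/*
	 * Page cache insertion takes invalidate_lock shared; truncation
	 * and hole punching take it exclusive, so readahead cannot insert
	 * stale data and mark it up-to-date mid-truncate.
	 */
	filemap_invalidate_lock_shared(mapping);

	/* ... allocate large folios and add them to the page cache ... */

	read_pages(ractl);
	filemap_invalidate_unlock_shared(mapping);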