On Thu, 2024-12-12 at 15:55 +0100, Greg Kroah-Hartman wrote:
> 6.12-stable review patch. If anyone has any objections, please let
> me know.
>
> ------------------
>
> From: Jan Kara <jack@xxxxxxx>
>
> commit a220d6b95b1ae12c7626283d7609f0a1438e6437 upstream.
>
> This reverts commit 7c877586da3178974a8a94577b6045a48377ff25.

Isn't that moot now that 0938b1614648d5fbd832449a5a8a1b51d985323d is in
Linus's tree? It's not in 6.12 (yet?). It may be worth backporting
0938b1614 to the stable tree, but that's beyond my pay grade.

Phil.

https://lore.kernel.org/all/20241017062342.478973-1-kernel@xxxxxxxxxxxxxxxx/T/#u

commit 0938b1614648d5fbd832449a5a8a1b51d985323d
Author: Pankaj Raghav <p.raghav@xxxxxxxxxxx>
Date:   Thu Oct 17 08:23:42 2024 +0200

    mm: don't set readahead flag on a folio when lookahead_size > nr_to_read

    The readahead flag is set on a folio based on lookahead_size and
    nr_to_read. For example, when the readahead happens from index to
    index + nr_to_read, the readahead `mark` offset from index is set
    at nr_to_read - lookahead_size.

    There are scenarios where lookahead_size > nr_to_read. For example,
    a readahead window was created, but the file was truncated before
    the readahead could start. do_page_cache_ra() will clamp nr_to_read
    if the readahead window extends beyond EOF after truncation. When
    this happens, the readahead flag should not be set on any folio in
    the current readahead window.

    The current calculation of `mark` with mapping_min_order > 0 gives
    an incorrect result when lookahead_size > nr_to_read because of the
    round-up operation:

    index = 128
    nr_to_read = 16
    lookahead_size = 28
    mapping_min_order = 4 (16 pages)

    ra_folio_index = round_up(128 + 16 - 28, 16) = 128;
    mark = 128 - 128 = 0; # offset from index to set RA flag

    In the above example, the lookahead point actually lies outside the
    current readahead window, yet without this patch the RA flag is set
    on the folio at index 128. Marking the readahead flag on the wrong
    folio triggers a readahead when it is not necessary.

    Explicitly initialize `mark` to ULONG_MAX and only calculate it
    when lookahead_size is within the readahead window.

diff --git a/mm/readahead.c b/mm/readahead.c
index 3dc6c7a128dd..475d2940a1ed 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -206,9 +206,9 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		unsigned long nr_to_read, unsigned long lookahead_size)
 {
 	struct address_space *mapping = ractl->mapping;
-	unsigned long ra_folio_index, index = readahead_index(ractl);
+	unsigned long index = readahead_index(ractl);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long mark, i = 0;
+	unsigned long mark = ULONG_MAX, i = 0;
 	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
 
 	/*
@@ -232,9 +232,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 * index that only has lookahead or "async_region" to set the
 	 * readahead flag.
 	 */
-	ra_folio_index = round_up(readahead_index(ractl) + nr_to_read - lookahead_size,
-			min_nrpages);
-	mark = ra_folio_index - index;
+	if (lookahead_size <= nr_to_read) {
+		unsigned long ra_folio_index;
+
+		ra_folio_index = round_up(readahead_index(ractl) +
+				nr_to_read - lookahead_size,
+				min_nrpages);
+		mark = ra_folio_index - index;
+	}
 	nr_to_read += readahead_index(ractl) - index;
 	ractl->_index = index;
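
For anyone who wants to replay the arithmetic from the commit message
outside the kernel, here is a small userspace sketch (illustrative only,
not the patch itself; round_up() below is a simplified power-of-two
stand-in for the kernel macro):

#include <stdio.h>
#include <limits.h>

/* Round x up to the next multiple of the power-of-two 'multiple'. */
static unsigned long round_up(unsigned long x, unsigned long multiple)
{
	return (x + multiple - 1) & ~(multiple - 1);
}

int main(void)
{
	unsigned long index = 128, nr_to_read = 16, lookahead_size = 28;
	unsigned long min_nrpages = 16;	/* mapping_min_order = 4 */
	unsigned long mark;

	/* Old calculation: done unconditionally, so round_up() lands back
	 * on 'index' even though the lookahead point lies outside the
	 * readahead window. */
	mark = round_up(index + nr_to_read - lookahead_size, min_nrpages) - index;
	printf("old mark = %lu\n", mark);	/* prints 0: RA flag set wrongly */

	/* New calculation: mark stays ULONG_MAX (no folio marked) unless
	 * the lookahead point falls inside the current window. */
	mark = ULONG_MAX;
	if (lookahead_size <= nr_to_read)
		mark = round_up(index + nr_to_read - lookahead_size,
				min_nrpages) - index;
	printf("new mark = %lu\n", mark);	/* prints ULONG_MAX */

	return 0;
}

With the example numbers from the commit message this prints 0 for the
old code and ULONG_MAX for the new one, which is exactly the behaviour
change the patch describes.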