This is a note to let you know that I've just added the patch titled

    iomap: Fix iomap_adjust_read_range for plen calculation

to the 6.6-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     iomap-fix-iomap_adjust_read_range-for-plen-calculati.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit f77fded7a1a7db5e35db2023a63d32431d9d612a
Author: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
Date:   Tue May 7 14:25:42 2024 +0530

    iomap: Fix iomap_adjust_read_range for plen calculation

    [ Upstream commit f5ceb1bbc98c69536d4673a97315e8427e67de1b ]

    If the extent spans the block that contains i_size, we need to handle
    both halves separately so that we properly zero data in the page cache
    for blocks that are entirely outside of i_size. But this is needed only
    when i_size is within the current folio under processing.
    "orig_pos + length > isize" can be true for all folios if the mapped
    extent length is greater than the folio size. That makes plen break
    for every folio instead of only the last folio.

    So use orig_plen for checking if "orig_pos + orig_plen > isize".

    Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
    Link: https://lore.kernel.org/r/a32e5f9a4fcfdb99077300c4020ed7ae61d6e0f9.1715067055.git.ritesh.list@xxxxxxxxx
    Reviewed-by: Christoph Hellwig <hch@xxxxxx>
    Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>
    Reviewed-by: Jan Kara <jack@xxxxxxx>
    cc: Ojaswin Mujoo <ojaswin@xxxxxxxxxxxxx>
    Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 129a85633797a..975fd88c1f0f4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -201,6 +201,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	unsigned block_size = (1 << block_bits);
 	size_t poff = offset_in_folio(folio, *pos);
 	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
+	size_t orig_plen = plen;
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;
 
@@ -237,7 +238,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
 	 * handle both halves separately so that we properly zero data in the
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
-	if (orig_pos <= isize && orig_pos + length > isize) {
+	if (orig_pos <= isize && orig_pos + orig_plen > isize) {
 		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;
 
 		if (first <= end && last > end)
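
For readers following the arithmetic in the commit message, here is a small, stand-alone user-space sketch (not kernel code) of the two checks. All values in it (4 KiB folios, a 1 MiB mapped extent, i_size = 10000) are hypothetical and chosen only to show that the old check "orig_pos + length > isize" fires for every folio of a large extent, while the new check "orig_pos + orig_plen > isize" fires only for the folio that actually contains i_size.

#include <stdio.h>
#include <stdint.h>

#define FOLIO_SIZE 4096ULL	/* hypothetical folio size */

int main(void)
{
	uint64_t extent_len = 1024 * 1024;	/* hypothetical mapped extent length */
	uint64_t isize = 10000;			/* hypothetical i_size, inside the third folio */

	/* Walk the extent one folio at a time, stopping once we pass i_size. */
	for (uint64_t pos = 0; pos < extent_len && pos <= isize; pos += FOLIO_SIZE) {
		uint64_t length = extent_len - pos;	/* remaining mapped length at this pos */
		uint64_t orig_plen = length < FOLIO_SIZE ? length : FOLIO_SIZE;

		int old_check = (pos <= isize && pos + length > isize);
		int new_check = (pos <= isize && pos + orig_plen > isize);

		printf("folio at %8llu: old check = %d, new check = %d\n",
		       (unsigned long long)pos, old_check, new_check);
	}
	return 0;
}

With these numbers the old check evaluates to 1 for the folios at 0 and 4096 as well, even though i_size lies in the folio at 8192; the new check is 1 only for that last folio, which is what the fix intends.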