On Tue 07-05-24 14:25:42, Ritesh Harjani (IBM) wrote:
> If the extent spans the block that contains i_size, we need to handle
> both halves separately so that we properly zero data in the page cache
> for blocks that are entirely outside of i_size. But this is needed only
> when i_size is within the current folio under processing.
> "orig_pos + length > isize" can be true for all folios if the mapped
> extent length is greater than the folio size. That makes plen break
> for every folio instead of only the last folio.
>
> So use orig_plen to check whether "orig_pos + orig_plen > isize".
>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
> cc: Ojaswin Mujoo <ojaswin@xxxxxxxxxxxxx>
> Reviewed-by: Christoph Hellwig <hch@xxxxxx>
> Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

								Honza

> ---
>  fs/iomap/buffered-io.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 4e8e41c8b3c0..9f79c82d1f73 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -241,6 +241,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  	unsigned block_size = (1 << block_bits);
>  	size_t poff = offset_in_folio(folio, *pos);
>  	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
> +	size_t orig_plen = plen;
>  	unsigned first = poff >> block_bits;
>  	unsigned last = (poff + plen - 1) >> block_bits;
>
> @@ -277,7 +278,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
>  	 * handle both halves separately so that we properly zero data in the
>  	 * page cache for blocks that are entirely outside of i_size.
>  	 */
> -	if (orig_pos <= isize && orig_pos + length > isize) {
> +	if (orig_pos <= isize && orig_pos + orig_plen > isize) {
>  		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;
>
>  		if (first <= end && last > end)
> --
> 2.44.0

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR