On Tue, Aug 27, 2024 at 07:09:48AM +0200, Christoph Hellwig wrote:
> When direct I/O completion invalidates the page cache it holds neither
> the i_rwsem nor the invalidate_lock, so it can be racing with
> iomap_write_delalloc_release.  If the search for the end of the region
> that contains data returns the start offset, we hit such a race and
> just need to look for the end of the newly created hole instead.
> 
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  fs/iomap/buffered-io.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index f420c53d86acc5..69a931de1979b9 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1241,7 +1241,15 @@ static int iomap_write_delalloc_release(struct inode *inode,
>  		error = data_end;
>  		goto out_unlock;
>  	}
> -	WARN_ON_ONCE(data_end <= start_byte);
> +
> +	/*
> +	 * If we race with post-direct I/O invalidation of the page cache,
> +	 * there might be no data left at start_byte.
> +	 */
> +	if (data_end == start_byte)
> +		continue;

Is there any chance that we could get stuck in a loop here?

I think it's the case that if SEEK_HOLE returns data_end == start_byte,
then the next time through the loop, the SEEK_DATA will return something
that is > start_byte.  Unless someone is very rapidly writing and
punching the page cache?  Hmm, but then if *xfs* is punching delalloc,
we're holding the iolock, so who else could be doing that?

If the answers are 'no' and 'nobody', then
Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> +
> +	WARN_ON_ONCE(data_end < start_byte);
>  	WARN_ON_ONCE(data_end > scan_end_byte);
> 
>  	error = iomap_write_delalloc_scan(inode, &punch_start_byte,
> -- 
> 2.43.0
> 
> 