On Tue, Sep 10, 2024 at 07:39:10AM +0300, Christoph Hellwig wrote:
> All callers of iomap_zero_range already hold invalidate_lock, so we can't
> take it again in iomap_file_buffered_write_punch_delalloc.
> 
> Use the passed in flags argument to detect if we're called from a zeroing
> operation and don't take the lock again in this case.
> 
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  fs/iomap/buffered-io.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 52f285ae4bddcb..3d7e69a542518a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1188,8 +1188,13 @@ static void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,
>  	 * folios and dirtying them via ->page_mkwrite whilst we walk the
>  	 * cache and perform delalloc extent removal. Failing to do this can
>  	 * leave dirty pages with no space reservation in the cache.
> +	 *
> +	 * For zeroing operations the callers already hold invalidate_lock.
>  	 */
> -	filemap_invalidate_lock(inode->i_mapping);
> +	if (flags & IOMAP_ZERO)
> +		rwsem_assert_held_write(&inode->i_mapping->invalidate_lock);

Does the other iomap_zero_range user (gfs2) take the invalidate lock?
AFAICT it doesn't.

Shouldn't we annotate iomap_zero_range to say that callers have to hold
i_rwsem and the invalidate_lock?

--D

> +	else
> +		filemap_invalidate_lock(inode->i_mapping);
>  	while (start_byte < scan_end_byte) {
>  		loff_t data_end;
> 
> @@ -1240,7 +1245,8 @@ static void iomap_write_delalloc_release(struct inode *inode, loff_t start_byte,
>  		punch(inode, punch_start_byte, end_byte - punch_start_byte,
>  				iomap);
>  out_unlock:
> -	filemap_invalidate_unlock(inode->i_mapping);
> +	if (!(flags & IOMAP_ZERO))
> +		filemap_invalidate_unlock(inode->i_mapping);
>  }
> 
>  /*
> -- 
> 2.45.2
> 
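
[Editor's note: a minimal, hypothetical sketch of the annotation being
suggested above, not part of the posted patch.  It assumes the current
5-argument iomap_zero_range() signature and reuses the same
rwsem_assert_held_write() helper the patch itself uses; whether gfs2
actually holds invalidate_lock here is exactly the open question in the
review, so such assertions could not land until that is settled.]

int
iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
		const struct iomap_ops *ops)
{
	/*
	 * Document the locking contract: callers must serialize against
	 * other writers and against page cache invalidation, i.e. hold
	 * both i_rwsem and the mapping's invalidate_lock exclusively.
	 */
	rwsem_assert_held_write(&inode->i_rwsem);
	rwsem_assert_held_write(&inode->i_mapping->invalidate_lock);

	/* existing zeroing implementation would follow unchanged */
}

If gfs2 does not currently take filemap_invalidate_lock() around its
call, it would presumably need to grow that before an assertion like the
second one could be added.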