Re: [RFC PATCH] btrfs: don't call btrfs_sync_file from iomap context

On 9/1/20 5:46 PM, Dave Chinner wrote:
On Tue, Sep 01, 2020 at 11:11:58AM -0400, Josef Bacik wrote:
On 9/1/20 9:06 AM, Johannes Thumshirn wrote:
This happens because iomap_dio_complete() calls into generic_write_sync()
if we have the data-sync flag set. But as we're still under the
inode_lock() from btrfs_file_write_iter() we will deadlock once
btrfs_sync_file() tries to acquire the inode_lock().

Calling into generic_write_sync() is not needed as __btrfs_direct_write()
already takes care of persisting the data on disk. We can temporarily drop
the IOCB_DSYNC flag before calling into __btrfs_direct_write() so the
iomap code won't try to call into the sync routines as well.

References: https://github.com/btrfs/fstests/issues/12
Fixes: da4d7c1b4c45 ("btrfs: switch to iomap for direct IO")
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>
---
   fs/btrfs/file.c | 5 ++++-
   1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index b62679382799..c75c0f2a5f72 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2023,6 +2023,7 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
   		atomic_inc(&BTRFS_I(inode)->sync_writers);
   	if (iocb->ki_flags & IOCB_DIRECT) {
+		iocb->ki_flags &= ~IOCB_DSYNC;
   		num_written = __btrfs_direct_write(iocb, from);
   	} else {
   		num_written = btrfs_buffered_write(iocb, from);
@@ -2046,8 +2047,10 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
   	if (num_written > 0)
   		num_written = generic_write_sync(iocb, num_written);
-	if (sync)
+	if (sync) {
+		iocb->ki_flags |= IOCB_DSYNC;
   		atomic_dec(&BTRFS_I(inode)->sync_writers);
+	}
   out:
   	current->backing_dev_info = NULL;
   	return num_written ? num_written : err;
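
For readers following the thread, the deadlock described in the patch boils
down to roughly this call chain (a simplified sketch using the function names
mentioned above; the intermediate iomap submission calls are elided):

/*
 * Simplified sketch of the reported deadlock, not verbatim code:
 *
 *   btrfs_file_write_iter()
 *     inode_lock(inode)                <- takes the inode lock
 *     __btrfs_direct_write()
 *       ... iomap direct IO submission ...
 *         iomap_dio_complete()
 *           generic_write_sync()       <- IOCB_DSYNC is still set
 *             vfs_fsync_range()
 *               btrfs_sync_file()      <- btrfs ->fsync implementation
 *                 inode_lock(inode)    <- deadlock: lock already held above
 */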


Christoph, I feel like this is broken.

No, it isn't broken, it's just a -different design- to the old
direct IO path. It was done this way by design because the old
way of requiring separate paths for calling generic_write_sync() for
sync and AIO is ....  nasty, and doesn't allow for optimisation of
IO completion functionality that may be wholly dependent on
submission time inode state.

e.g. moving the O_DSYNC completion out of the
IOMAP_F_DIRTY submission context means we can't reliably do FUA
writes to avoid calls to generic_write_sync() completely.
Compromising that functionality is going to cause major performance
regressions for high performance enterprise databases using O_DSYNC
AIO+DIO...
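
To make that concrete, the submission-time decision Dave is describing looks
roughly like the following (a hypothetical helper, not the actual
fs/iomap/direct-io.c code; the device capability check is passed in as a plain
bool to keep the sketch self-contained):

#include <linux/fs.h>
#include <linux/iomap.h>

/*
 * Hypothetical helper: decide at submission time whether an O_DSYNC direct
 * write still needs generic_write_sync() at IO completion, or whether the
 * bio can simply be issued with REQ_FUA and the completion-time sync skipped.
 */
static bool dsync_write_needs_completion_sync(const struct kiocb *iocb,
					      const struct iomap *iomap,
					      bool device_supports_fua)
{
	if (!(iocb->ki_flags & IOCB_DSYNC))
		return false;		/* no data integrity semantics requested */

	/*
	 * Dirty in-core inode state means metadata still has to be made
	 * stable, so FUA alone is not enough and a completion-time sync is
	 * required.  Same if the device cannot do FUA writes.
	 */
	if ((iomap->flags & IOMAP_F_DIRTY) || !device_supports_fua)
		return true;

	/* Clean overwrite on a FUA-capable device: issue the bio as REQ_FUA. */
	return false;
}

If generic_write_sync() is instead forced to happen in a context where the
submission-time IOMAP_F_DIRTY information is no longer available, that
shortcut can't be taken, which is the regression being pointed at here.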

Xfs and ext4 get away with this for
different reasons,

No, they "don't get away with it", this is how it was designed to
work.


Didn't mean this as a slight, I'm just saying this is why it works fine for you guys and doesn't work for us. When we first looked at this we couldn't understand how it didn't blow up for you but did blow up for us. I'm providing context, not saying you guys are broken or doing it wrong.

ext4 doesn't take the inode_lock() at all in fsync, and
xfs takes the ILOCK instead of the IOLOCK, so it's fine.  However, btrfs uses
inode_lock() in ->fsync (not for the IO, just for the logging part).  A long
time ago I specifically pushed the inode locking down into ->fsync()
handlers to give us this sort of control.

I'm not 100% on the iomap stuff, but the fix seems like we need to move the
generic_write_sync() out of iomap_dio_complete() completely, and the callers
do their own thing, much like the normal generic_file_write_iter() does.
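
For comparison, the pattern being referred to here is (roughly, trimmed and
from memory, so treat it as a sketch rather than the exact mm/filemap.c code)
that generic_file_write_iter() drops the inode lock before calling
generic_write_sync():

#include <linux/fs.h>
#include <linux/uio.h>

ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct file *file = iocb->ki_filp;
	struct inode *inode = file->f_mapping->host;
	ssize_t ret;

	inode_lock(inode);
	ret = generic_write_checks(iocb, from);
	if (ret > 0)
		ret = __generic_file_write_iter(iocb, from);
	inode_unlock(inode);

	if (ret > 0)
		ret = generic_write_sync(iocb, ret);	/* no inode lock held here */
	return ret;
}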

That effectively breaks O_DSYNC AIO and requires us to reintroduce
all the nasty code that the old direct IO path required in both the
infrastructure and the filesystems to handle it. That's really not an
acceptable solution to an internal btrfs locking issue...

And then I'd like to add a WARN_ON(lockdep_is_held()) in vfs_fsync_range()
so we can avoid this sort of thing in the future.  What do you think?
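
The check being suggested would presumably look something like this (purely
illustrative; the exact form and placement are guesses, and as the reply below
points out it would trip on some legitimate callers):

#include <linux/fs.h>
#include <linux/lockdep.h>

/*
 * Illustrative only: warn if ->fsync() is about to run while the caller
 * already holds the inode lock.  Only meaningful with CONFIG_LOCKDEP.
 */
static inline void warn_if_inode_lock_held(struct inode *inode)
{
#ifdef CONFIG_LOCKDEP
	WARN_ON_ONCE(lockdep_is_held(&inode->i_rwsem));
#endif
}

/* ...which vfs_fsync_range() would call just before file->f_op->fsync(). */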

That's not going to work, either. There are filesystems that call
vfs_fsync_range() directly from under the inode_lock(). For example,
the fallocate() path in gfs2. And it's called under the ext4 and XFS
MMAPLOCK from the dax page fault path, which is the page fault
equivalent of the inode_lock(). IOWs, if you know that you aren't
going to take inode locks in your ->fsync() method, there's nothing
that says you cannot call vfs_fsync_range() while holding those
inode locks.

I converted ->fsync to not have the i_mutex taken before it's called _years_ ago in

02c24a82187d5a628c68edfe71ae60dc135cd178

and part of what I did was update the locking documentation around it. So in my head, the locking rule was "No VFS locks held on entry". Obviously that's not true today, but if we're going to change the assumptions around these things then we really ought to:

1) Make sure they're true for _all_ file systems.
2) Document it when it's changed.

OK, so iomap was designed assuming it was safe to take the inode_lock() before calling ->fsync(). That's fine, but this is kind of a bad way to find out. We really shouldn't have generic helpers whose locking rules differ depending on which file system uses them, because then we end up with situations like this, where suddenly we're having to come up with some weird solution because the generic code only works for a subset of file systems. Thanks,

Josef


