On 9/2/20 8:20 AM, Dave Chinner wrote:
On Wed, Sep 02, 2020 at 12:44:14PM +0100, Matthew Wilcox wrote:
On Wed, Sep 02, 2020 at 09:58:30AM +1000, Dave Chinner wrote:
Put simply: converting a filesystem to use iomap is not a "change
the filesystem interfacing code and it will work" modification. We
ask that filesystems be modified to conform to the iomap IO
exclusion model; adding special cases for every potential
locking and mapping quirk every different filesystem has is part of
what turned the old direct IO code into an unmaintainable nightmare.
That's fine, but this is kind of a bad way to find
out. We really shouldn't have generic helpers that have different generic
locking rules based on which file system uses them.
We certainly can change the rules for new infrastructure. Indeed, we
had to change the rules to support DAX. The whole point of the
iomap infrastructure was that it enabled us to use code that already
worked for DAX (the XFS code) in multiple filesystems. And as people
have realised that DIO via iomap is much faster than the old DIO
code and is a much more efficient way of doing large buffered IO,
other filesystems have started to use it.
However....
Because then we end up
with situations like this, where suddenly we're having to come up with some
weird solution because the generic thing only works for a subset of file
systems. Thanks,
.... we've always said "you need to change the filesystem code to
use iomap". This is simply a reflection of the fact that iomap has
different rules and constraints to the old code, and so it's not a
direct plug-in replacement. There are no short cuts here...
Can you point me (and I suspect Josef!) towards the documentation of the
locking model? I was hoping to find Documentation/filesystems/iomap.rst
but all the 'iomap' strings in Documentation/ refer to pci_iomap and
similar, except for this in the DAX documentation:
There's no locking model documentation because there is no locking
in the iomap direct IO code. The filesystem defines and does all the
locking, so there's pretty much nothing to document for iomap.
IOWs, the only thing iomap_dio_rw() requires is that the IO completion
paths do not take the same locks that the IO submission path
requires. And that's because:
/*
 * iomap_dio_rw() always completes O_[D]SYNC writes regardless of whether the IO
 * is being issued as AIO or not. [...]
 */
So you obviously can't sit waiting for dio completion in
iomap_dio_rw() while holding the submission lock if completion
requires the submission lock to make progress.
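
To make that concrete, here is a minimal sketch of the pattern that
deadlocks. The myfs_* names are invented purely for illustration; the
iomap_dio_rw() and ->end_io signatures are per the 5.9-era iomap code:

#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/uio.h>

/* Assume the block mapping callbacks are defined elsewhere. */
extern const struct iomap_ops myfs_iomap_ops;

static int myfs_dio_end_io(struct kiocb *iocb, ssize_t size, int error,
			   unsigned flags)
{
	struct inode *inode = file_inode(iocb->ki_filp);

	/*
	 * BROKEN: the task that took inode_lock() in myfs_dio_write()
	 * below may still hold it when this completion runs (for a
	 * sync write the completion is even run from that same task),
	 * so taking the same lock here deadlocks.
	 */
	inode_lock(inode);
	/* ... completion work: i_size update, extent conversion ... */
	inode_unlock(inode);
	return error;
}

static const struct iomap_dio_ops myfs_dio_ops = {
	.end_io = myfs_dio_end_io,
};

static ssize_t myfs_dio_write(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	inode_lock(inode);	/* the DIO submission lock */
	ret = iomap_dio_rw(iocb, from, &myfs_iomap_ops, &myfs_dio_ops,
			   is_sync_kiocb(iocb));
	inode_unlock(inode);
	return ret;
}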
FWIW, iomap_dio_rw() originally required the inode_lock() to be held
and contained a lockdep assert to verify this, but....
commit 3ad99bec6e82e32fa9faf2f84e74b134586b46f7
Author: Goldwyn Rodrigues <rgoldwyn@xxxxxxxx>
Date:   Sat Nov 30 09:59:25 2019 -0600

    iomap: remove lockdep_assert_held()

    Filesystems such as btrfs can perform direct I/O without holding the
    inode->i_rwsem in some of the cases like writing within i_size. So,
    remove the check for lockdep_assert_held() in iomap_dio_rw().

    Reviewed-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
    Signed-off-by: Goldwyn Rodrigues <rgoldwyn@xxxxxxxx>
    Signed-off-by: David Sterba <dsterba@xxxxxxxx>
... btrfs has special corner cases for direct IO locking and hence
we removed the lockdep assert....
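
For reference, the assertion that commit deleted was, if I recall the
diff correctly, this single line at the top of iomap_dio_rw():

	lockdep_assert_held(&inode->i_rwsem);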
IOWs, iomap_dio_rw() really does not care what strategy filesystems
use to serialise DIO against other operations. Filesystems can use
whatever IO serialisation mechanism they want (mutex, rwsem, range
locks, etc) as long as they obey the one simple requirement: do not
take the DIO submission lock in the DIO completion path.
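
IOWs, something like the following shape keeps a filesystem on the
right side of that requirement. Again myfs_* is invented and
myfs_iomap_ops/myfs_dio_ops are as in the earlier sketch; any
serialisation primitive works in place of the i_rwsem:

static int myfs_dio_end_io(struct kiocb *iocb, ssize_t size, int error,
			   unsigned flags)
{
	/*
	 * Completion work only -- on-disk i_size updates, unwritten
	 * extent conversion, etc.  Never takes the submission lock;
	 * anything that needs it must be deferred elsewhere (e.g. to
	 * a workqueue item the submission path does not wait on).
	 */
	return error;
}

static ssize_t myfs_dio_write(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	/* mutex, rwsem, range lock -- iomap doesn't care which. */
	inode_lock_shared(inode);
	ret = iomap_dio_rw(iocb, from, &myfs_iomap_ops, &myfs_dio_ops,
			   is_sync_kiocb(iocb));
	inode_unlock_shared(inode);
	return ret;
}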
Goldwyn has been working on these patches for a long time, and is
actually familiar with this code, and he still missed that these two
interfaces are being mixed. This is a problem that I want to solve. He
didn't notice it in any of his testing, which IIRC ran for something
like 6 months before this stuff actually landed in the btrfs tree. If
we're going to mix interfaces then it should be blatantly obvious to
developers that that's what's happening, so they find out during
development, not after the patches have landed, and certainly not after
they've made it out to users. Thanks,
Josef