On Tue, Sep 27, 2022 at 09:54:27PM -0700, Darrick J. Wong wrote:
> On Fri, Sep 23, 2022 at 10:04:03AM +1000, Dave Chinner wrote:
> > On Wed, Sep 21, 2022 at 08:44:01PM -0700, Darrick J. Wong wrote:
> > > On Wed, Sep 21, 2022 at 06:29:59PM +1000, Dave Chinner wrote:
> > > > @@ -1182,9 +1210,26 @@ xfs_buffered_write_iomap_end(
> > > >  	return 0;
> > > >  }
> > > >  
> > > > +/*
> > > > + * Check that the iomap passed to us is still valid for the given offset and
> > > > + * length.
> > > > + */
> > > > +static bool
> > > > +xfs_buffered_write_iomap_valid(
> > > > +	struct inode		*inode,
> > > > +	const struct iomap	*iomap)
> > > > +{
> > > > +	int			seq = *((int *)&iomap->private);
> > > > +
> > > > +	if (seq != READ_ONCE(XFS_I(inode)->i_df.if_seq))
> > > > +		return false;
> > > > +	return true;
> > > > +}
> > > 
> > > Wheee, thanks for tackling this one. :)
> > 
> > I think this one might have a long way to run yet.... :/
> 
> It's gonna be a fun time backporting this all to 4.14. ;)

Hopefully it won't be a huge issue; the current code is more contained
to XFS and much less dependent on iomap iteration stuff...

> Btw, can you share the reproducer?

Not sure. The current reproducer I have is 2500 lines of complex C code
that was originally based on a reproducer the original reporter
provided. It does lots of stuff that isn't directly related to
reproducing the issue, and will be impossible to review and maintain as
it stands in fstests.

I will probably end up cutting it down to just a simple program that
reproduces the specific IO pattern that leads to the corruption
(reverse sequential non-block-aligned writes), then use the fstest
wrapper script to set up cgroup memory limits to cause writeback and
memory reclaim to race with the non-block-aligned writes. We only need
md5sums to detect corruption, so I think that the whole thing can be
done in a couple of hundred lines of shell and C code.

If I can reduce the write() IO pattern down to an xfs_io invocation,
everything can be done directly in the fstest script...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
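
As a rough illustration of the "reverse sequential non-block-aligned
writes" pattern described above, a minimal sketch might look like the
following. The file path, chunk size and file size are placeholder
assumptions for illustration only, not values taken from the actual
reproducer:

/*
 * Sketch only: write odd-sized chunks from the end of a file back
 * towards offset zero, so the file is populated in reverse sequential
 * order and the writes do not land on filesystem block boundaries.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <err.h>

#define CHUNK	1000			/* deliberately not block aligned */
#define NCHUNKS	(64 * 1024)		/* roughly a 64MB file */

int
main(void)
{
	char	buf[CHUNK];
	off_t	off;
	int	fd;

	/* "/mnt/testfile" is an assumed scratch file path */
	fd = open("/mnt/testfile", O_CREAT | O_RDWR | O_TRUNC, 0644);
	if (fd < 0)
		err(1, "open");

	memset(buf, 0xab, sizeof(buf));

	/* last chunk first, then walk backwards to the start of the file */
	for (off = (off_t)(NCHUNKS - 1) * CHUNK; off >= 0; off -= CHUNK) {
		if (pwrite(fd, buf, CHUNK, off) != CHUNK)
			err(1, "pwrite");
	}

	close(fd);
	return 0;
}

Checksumming the resulting file (e.g. with md5sum) after writeback and
memory reclaim have been forced by the cgroup memory limit would then
be enough to detect the corruption the test is looking for.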