On Tue, Dec 07, 2021 at 10:35:45AM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> As part of multiple customer escalations due to file data corruption
> after copy on write operations, I wrote some fstests that use fsstress
> to hammer on COW to shake things loose. Regrettably, I caught some
> filesystem shutdowns due to incorrect rmap operations with the following
> loop:
> 
> mount <filesystem>                        # (0)
> fsstress <run only readonly ops> &        # (1)
> while true; do
>     fsstress <run all ops>
>     mount -o remount,ro                   # (2)
>     fsstress <run only readonly ops>
>     mount -o remount,rw                   # (3)
> done
> 
> When (2) happens, notice that (1) is still running.  xfs_remount_ro will
> call xfs_blockgc_free_space to walk the inode cache to free all the COW
> extents, but the blockgc mechanism races with (1)'s reader threads to
> take IOLOCKs and loses, which means that it doesn't clean them all out.
> Call such a file (A).
> 
> When (3) happens, xfs_remount_rw calls xfs_reflink_recover_cow, which
> walks the ondisk refcount btree and frees any COW extent that it finds.
> This function does not check the inode cache, which means that the
> incore COW fork of inode (A) is now inconsistent with the ondisk
> metadata.  If one of those former COW extents is allocated and mapped
> into another file (B) and someone triggers a COW to the stale
> reservation in (A), A's dirty data will be written into (B) and once
> that's done, those blocks will be transferred to (A)'s data fork without
> bumping the refcount.
> 
> The results are catastrophic -- file (B) and the refcount btree are now
> corrupt.  Solve this race by forcing xfs_blockgc_free_space to run
> synchronously, which causes xfs_icwalk to return to inodes that were
> skipped because the blockgc code couldn't take the IOLOCK.  This is safe
> to do here because the VFS has already prohibited new writer threads.
> 
> Fixes: 10ddf64e420f ("xfs: remove leftover CoW reservations when remounting ro")
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_super.c |   14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)

Looks good; I went through the analysis yesterday when you mentioned it
on #xfs. Minor nit below, otherwise:

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index e21459f9923a..0c07a4aef3b9 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -1765,7 +1765,10 @@ static int
>  xfs_remount_ro(
>  	struct xfs_mount	*mp)
>  {
> -	int			error;
> +	struct xfs_icwalk	icw = {
> +		.icw_flags	= XFS_ICWALK_FLAG_SYNC,
> +	};
> +	int			error;
>  
>  	/*
>  	 * Cancel background eofb scanning so it cannot race with the final
> @@ -1773,8 +1776,13 @@ xfs_remount_ro(
>  	 */
>  	xfs_blockgc_stop(mp);
>  
> -	/* Get rid of any leftover CoW reservations... */
> -	error = xfs_blockgc_free_space(mp, NULL);
> +	/*
> +	 * Clean out all remaining COW staging extents.  This extra step is
> +	 * done synchronously because the background blockgc worker could
> +	 * have raced with a reader thread and failed to grab an IOLOCK.
> +	 * In that case, the inode could still have post-eof and COW blocks.
> +	 */

Rather than describe how inodes might be skipped here, the constraint
we are operating under should be described. That is:

	/*
	 * We need to clear out all remaining COW staging extents so
	 * that we don't leave inodes requiring modifications during
	 * inactivation and reclaim on a read-only mount.  We must
	 * check and process every inode currently in memory, hence
	 * this requires a synchronous inode cache scan to be
	 * executed.
	 */
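For reference, a sketch of how the function would read with that comment
folded in. The quoted hunk was trimmed above, so the call site and the
error handling shown here are assumptions (presumably the rest of the
patch passes &icw to xfs_blockgc_free_space()); only the quoted diff is
authoritative:

	static int
	xfs_remount_ro(
		struct xfs_mount	*mp)
	{
		/*
		 * XFS_ICWALK_FLAG_SYNC makes the walk revisit inodes that
		 * were skipped because the IOLOCK couldn't be taken.
		 */
		struct xfs_icwalk	icw = {
			.icw_flags	= XFS_ICWALK_FLAG_SYNC,
		};
		int			error;

		/* Stop the background blockgc worker before the final scan. */
		xfs_blockgc_stop(mp);

		/*
		 * We need to clear out all remaining COW staging extents so
		 * that we don't leave inodes requiring modifications during
		 * inactivation and reclaim on a read-only mount.  We must
		 * check and process every inode currently in memory, hence
		 * this requires a synchronous inode cache scan to be
		 * executed.
		 */
		error = xfs_blockgc_free_space(mp, &icw);	/* assumed call site */
		if (error)
			return error;
		...
	}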
Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx