On Sun, Mar 07, 2021 at 12:26:02PM -0800, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> A recent log refactoring patchset from Brian Foster relaxed fsfreeze
> behavior with regards to the buffer cache -- now freeze only waits for
> pending buffer IO to finish, and does not try to drain the buffer cache
> LRU. As a result, fsfreeze should no longer stall indefinitely while
> fsmap runs. Drop the sb_start_write calls around fsmap invocations.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_fsmap.c |   14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
> 
> 
> diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c
> index 9ce5e7d5bf8f..34f2b971ce43 100644
> --- a/fs/xfs/xfs_fsmap.c
> +++ b/fs/xfs/xfs_fsmap.c
> @@ -904,14 +904,6 @@ xfs_getfsmap(
>  	info.fsmap_recs = fsmap_recs;
>  	info.head = head;
>  
> -	/*
> -	 * If fsmap runs concurrently with a scrub, the freeze can be delayed
> -	 * indefinitely as we walk the rmapbt and iterate over metadata
> -	 * buffers. Freeze quiesces the log (which waits for the buffer LRU to
> -	 * be emptied) and that won't happen while we're reading buffers.
> -	 */
> -	sb_start_write(mp->m_super);
> -
>  	/* For each device we support... */
>  	for (i = 0; i < XFS_GETFSMAP_DEVS; i++) {
>  		/* Is this device within the range the user asked for? */
> @@ -934,6 +926,11 @@ xfs_getfsmap(
>  		if (handlers[i].dev > head->fmh_keys[0].fmr_device)
>  			memset(&dkeys[0], 0, sizeof(struct xfs_fsmap));
>  
> +		/*
> +		 * Grab an empty transaction so that we can use its recursive
> +		 * buffer locking abilities to detect cycles in the rmapbt
> +		 * without deadlocking.
> +		 */
>  		error = xfs_trans_alloc_empty(mp, &tp);
>  		if (error)
>  			break;

Took me a moment to work out that this hunk is just adding a comment,
because it wasn't mentioned in the commit log. It's somewhat unrelated
to the bug fix, but it's harmless, so I don't see any need for you to
do any extra work to respin this patch to remove it.

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
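
[A minimal sketch of the pattern the new comment describes, for readers
following along: an empty transaction carries no log or block
reservation and exists only to track the buffers locked under it, so a
second lock attempt on a buffer the transaction already holds recurses
instead of deadlocking. xfs_trans_alloc_empty() and xfs_trans_cancel()
are the real XFS interfaces; the walker helper and function names here
are hypothetical, not the actual xfs_getfsmap() call chain.]

	STATIC int
	xfs_example_walk(
		struct xfs_mount	*mp)
	{
		struct xfs_trans	*tp;
		int			error;

		/* No log space or block reservation is taken here. */
		error = xfs_trans_alloc_empty(mp, &tp);
		if (error)
			return error;

		/*
		 * Buffers read under @tp are tracked on the transaction,
		 * so if a corrupt rmapbt cycles back to a block we
		 * already hold, the relock recurses rather than
		 * deadlocking. (Hypothetical helper.)
		 */
		error = xfs_example_visit_rmapbt(tp);

		/* Nothing was dirtied, so cancel just releases buffers. */
		xfs_trans_cancel(tp);
		return error;
	}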