On Thu, Jan 13, 2022 at 08:37:01AM -0500, Brian Foster wrote:
> We've had reports on distro (pre-deferred inactivation) kernels that
> inode reclaim (i.e. via drop_caches) can deadlock on the s_umount
> lock when invoked on a frozen XFS fs. This occurs because
> drop_caches acquires the lock

Eww, I hadn't even noticed drop_caches as a way in to a s_umount
deadlock.  Good catch!

> and then blocks in xfs_inactive() on
> transaction alloc for an inode that requires an eofb trim. unfreeze
> then blocks on the same lock and the fs is deadlocked.
>
> With deferred inactivation, the deadlock problem is no longer
> present because ->destroy_inode() no longer blocks whether the fs is
> frozen or not. There is still unfortunate behavior in that lookups
> of a pending inactive inode spin loop waiting for the pending
> inactive state to clear, which won't happen until the fs is
> unfrozen. This was always possible to some degree, but is
> potentially amplified by the fact that reclaim no longer blocks on
> the first inode that requires inactivation work. Instead, we
> populate the inactivation queues indefinitely. The side effect can
> be observed easily by invoking drop_caches on a frozen fs previously
> populated with eofb and/or cowblocks inodes and then running
> anything that relies on inode lookup (i.e., ls).
>
> To mitigate this behavior, invoke internal blockgc reclaim during
> the freeze sequence to guarantee that inode eviction doesn't lead to
> this state due to eofb or cowblocks inodes. This is similar to
> current behavior on read-only remount. Since the deadlock issue was
> present for such a long time, also document the subtle
> ->destroy_inode() constraint to avoid unintentional reintroduction
> of the deadlock problem in the future.

Yay for improved documentation. :)

> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_super.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index c7ac486ca5d3..1d0f87e47fa4 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -623,8 +623,13 @@ xfs_fs_alloc_inode(
>  }
>  
>  /*
> - * Now that the generic code is guaranteed not to be accessing
> - * the linux inode, we can inactivate and reclaim the inode.
> + * Now that the generic code is guaranteed not to be accessing the inode, we can
> + * inactivate and reclaim it.
> + *
> + * NOTE: ->destroy_inode() can be called (with ->s_umount held) while the
> + * filesystem is frozen. Therefore it is generally unsafe to attempt transaction
> + * allocation in this context. A transaction alloc that blocks on frozen state
> + * from a context with ->s_umount held will deadlock with unfreeze.
>   */
>  STATIC void
>  xfs_fs_destroy_inode(
> @@ -764,6 +769,16 @@ xfs_fs_sync_fs(
>  	 * when the state is either SB_FREEZE_FS or SB_FREEZE_COMPLETE.
>  	 */
>  	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT) {
> +		struct xfs_icwalk	icw = {0};
> +
> +		/*
> +		 * Clear out eofb and cowblocks inodes so eviction while frozen
> +		 * doesn't leave them sitting in the inactivation queue where
> +		 * they cannot be processed.

Would you mind adding an explicit link in the comment between needing
to get /all/ the inodes and _FLAG_SYNC?

"We must process every cached inode, so this requires a synchronous
cache scan."

> +		 */
> +		icw.icw_flags = XFS_ICWALK_FLAG_SYNC;
> +		xfs_blockgc_free_space(mp, &icw);

This needs to check the return value, right?

--D

> +
>  		xfs_inodegc_stop(mp);
>  		xfs_blockgc_stop(mp);
>  	}
> -- 
> 2.31.1
>
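
(To be concrete about the return value check above, a rough and
untested sketch of the shape I have in mind, reusing the names from
the patch; the "error" local and the early return out of
xfs_fs_sync_fs() are just my guess at the wiring, not something the
patch itself does:)

	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT) {
		struct xfs_icwalk	icw = {0};
		int			error;	/* sketch only: new local */

		/*
		 * Clear out eofb and cowblocks inodes so eviction while frozen
		 * doesn't leave them sitting in the inactivation queue where
		 * they cannot be processed.  We must process every cached
		 * inode, so this requires a synchronous cache scan.
		 */
		icw.icw_flags = XFS_ICWALK_FLAG_SYNC;
		error = xfs_blockgc_free_space(mp, &icw);
		if (error)
			return error;	/* sketch: fail the sync_fs call */

		xfs_inodegc_stop(mp);
		xfs_blockgc_stop(mp);
	}

That version also folds in the suggested comment wording about the
synchronous scan of every cached inode.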