[RFC PATCH v2] xfs: run blockgc on freeze to avoid iget stalls after reclaim

We've had reports on distro (pre-deferred inactivation) kernels that
inode reclaim (e.g. via drop_caches) can deadlock on the s_umount
lock when invoked on a frozen XFS fs. This occurs because
drop_caches acquires the lock and then blocks in xfs_inactive() on
transaction allocation for an inode that requires an eofb trim.
Unfreeze then blocks on the same lock and the fs is deadlocked.
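
Roughly, on those older kernels (a simplified sketch; the exact call
chain differs somewhat across kernel versions):

  drop_caches (via the sb shrinker)      unfreeze
  ---------------------------------      --------
  down_read(&sb->s_umount)
  evict()
    xfs_fs_destroy_inode()
      xfs_inactive()
        xfs_trans_alloc()
          sb_start_intwrite()  <-- blocks, fs is frozen
                                         thaw_super()
                                           down_write(&sb->s_umount)  <-- blocks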

With deferred inactivation, the deadlock problem is no longer
present because ->destroy_inode() no longer blocks, whether the fs
is frozen or not. There is still unfortunate behavior, however:
lookups of a pending-inactive inode spin waiting for the pending
inactive state to clear, which won't happen until the fs is
unfrozen. This was always possible to some degree, but it is
potentially amplified by the fact that reclaim no longer blocks on
the first inode that requires inactivation work. Instead, we
populate the inactivation queues indefinitely. The side effect can
be observed easily by invoking drop_caches on a frozen fs previously
populated with eofb and/or cowblocks inodes and then running
anything that relies on inode lookup (e.g., ls).

To mitigate this behavior, invoke a non-sync blockgc scan during the
freeze sequence to minimize the chance that inode evictions require
inactivation while the fs is frozen. A synchronous scan would provide
more of a guarantee, but is potentially unsafe from a partially
frozen context: a file read task may be blocked on a write fault
while holding iolock (such as when reading into a mapped buffer), and
a sync scan retries indefinitely on iolock failure, which risks
livelocking the freeze sequence.

Finally, since the deadlock issue was present for such a long time,
also document the subtle ->destroy_inode() constraint to avoid
unintentional reintroduction of the deadlock problem in the future.

Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>
---

Hi all,

There was a good amount of discussion on the first version of this patch
[1] a couple or so years ago now. The main issue was that using a sync
scan is unsafe in certain cases (best described here [2]), so this
best-effort approach was considered as a fallback option to improve
behavior.

The reason I'm reposting this is that it is one of several options for
dealing with the aforementioned deadlock on stable/distro kernels, so it
seems to have mutual benefit. Looking back through the original
discussion, I think there are several ways this could be improved to
provide the benefit of a sync scan. For example, if the scan could be
made to run before faults are locked out (re [3]), that may be
sufficient to allow a sync scan. Or, now that freeze_super() actually
checks for ->sync_fs() errors, an async scan could be followed by a
check for tagged blockgc entries that returns -EBUSY or some other
error to fail the freeze; such a failure would most likely be rare
and transient. Etc.
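
As a rough, untested illustration of that last idea (not a proposal;
xfs_blockgc_tagged() is a made-up helper name here for "some perag
still has blockgc-tagged inodes", it does not exist today), the
SB_FREEZE_PAGEFAULT branch could look something like:

	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT) {
		xfs_blockgc_free_space(mp, NULL);

		/*
		 * Hypothetical: if blockgc work remains tagged after the
		 * scan, fail the freeze with -EBUSY so freeze_super()
		 * backs out instead of freezing with pending post-eof/CoW
		 * block trims. xfs_blockgc_tagged() is an assumed helper.
		 */
		if (xfs_blockgc_tagged(mp))
			return -EBUSY;

		xfs_inodegc_stop(mp);
		xfs_blockgc_stop(mp);
	}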

These thoughts are mainly incremental improvements upon some form of
freeze-time scan and may not be of significant additional value given
current upstream behavior, so this patch takes the simple, best-effort
approach. Thoughts?

Brian

[1] https://lore.kernel.org/linux-xfs/20220113133701.629593-1-bfoster@xxxxxxxxxx/
[2] https://lore.kernel.org/linux-xfs/20220115224030.GA59729@xxxxxxxxxxxxxxxxxxx/
[3] https://lore.kernel.org/linux-xfs/Yehvc4g+WakcG1mP@bfoster/

 fs/xfs/xfs_super.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index d0009430a627..43e72e266666 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -657,8 +657,13 @@ xfs_fs_alloc_inode(
 }
 
 /*
- * Now that the generic code is guaranteed not to be accessing
- * the linux inode, we can inactivate and reclaim the inode.
+ * Now that the generic code is guaranteed not to be accessing the inode, we can
+ * inactivate and reclaim it.
+ *
+ * NOTE: ->destroy_inode() can be called (with ->s_umount held) while the
+ * filesystem is frozen. Therefore it is generally unsafe to attempt transaction
+ * allocation in this context. A transaction alloc that blocks on frozen state
+ * from a context with ->s_umount held will deadlock with unfreeze.
  */
 STATIC void
 xfs_fs_destroy_inode(
@@ -811,15 +816,18 @@ xfs_fs_sync_fs(
 	 * down inodegc because once SB_FREEZE_FS is set it's too late to
 	 * prevent inactivation races with freeze. The fs doesn't get called
 	 * again by the freezing process until after SB_FREEZE_FS has been set,
-	 * so it's now or never.  Same logic applies to speculative allocation
-	 * garbage collection.
+	 * so it's now or never.
 	 *
-	 * We don't care if this is a normal syncfs call that does this or
-	 * freeze that does this - we can run this multiple times without issue
-	 * and we won't race with a restart because a restart can only occur
-	 * when the state is either SB_FREEZE_FS or SB_FREEZE_COMPLETE.
+	 * The same logic applies to block garbage collection. Run a best-effort
+	 * blockgc scan to reduce the working set of inodes that the shrinker
+	 * would send to inactivation queue purgatory while frozen. We can't run
+	 * a sync scan with page faults blocked because that could potentially
+	 * livelock against a read task blocked on a page fault (i.e. if reading
+	 * into a mapped buffer) while holding iolock.
 	 */
 	if (sb->s_writers.frozen == SB_FREEZE_PAGEFAULT) {
+		xfs_blockgc_free_space(mp, NULL);
+
 		xfs_inodegc_stop(mp);
 		xfs_blockgc_stop(mp);
 	}
-- 
2.42.0