Hi all,

This patch series implements deferred inode inactivation.  Inactivation
is the process of updating all on-disk metadata when a file is deleted
-- freeing the data/attr/COW fork extent allocations, removing the
inode from the unlinked hash, marking the inode record itself free, and
updating the inode btrees so that they show the inode as not being in
use.

Currently, all of this inactivation is performed during in-core inode
reclaim, which creates two big headaches: first, it makes direct memory
reclamation /really/ slow, and second, it prohibits us from partially
freezing the filesystem for online fsck activity, because scrub can hit
direct memory reclaim.  It's ok for scrub to fail with ENOMEM, but it's
not ok for scrub to deadlock memory reclaim. :)

The implementation will be familiar to those who have studied how XFS
scans for reclaimable in-core inodes -- we create a couple more inode
state flags to mark an inode as needing inactivation and as being in
the middle of inactivation.  When an inode needs inactivation, we set
the iflags, set the RECLAIM radix tree tag, update a count of how many
resources will be freed by the pending inactivations, and schedule a
deferred work item.  The deferred work item scans the inode radix tree
for inodes to inactivate and performs all the on-disk metadata updates.
Once the inode has been inactivated, it is left in the reclaim state,
and the background reclaim worker (or direct reclaim) will get to it
eventually.  A sketch of this tag-and-defer pattern follows the
diffstat below.

Patches 1-2 refactor some of the inactivation predicates.

Patches 3-4 implement the count of blocks/quota that can be freed by
running inactivation; this is necessary to preserve the behavior where
you rm a file and the fs counters update immediately.

Patches 5-6 refactor more inode reclaim code so that we can reuse some
of it for inactivation.

Patch 8 delivers the core of the inactivation changes by altering the
inode lifetime state machine to include the new inode flags and
background workers.

Patches 9-10 make it so that an allocation attempt that hits ENOSPC
will force inactivation to free resources and try again.

Patch 11 converts the per-fs inactivation scanner to be tracked on a
per-AG basis so that we can be more targeted in our inactivation.

Patches 12-14 teach the per-AG sick status to remember whether we
inactivated any inodes that themselves had unfixed sick flags set, and
teach scrub to clear all those flags if it finds that the filesystem is
clean.

If you're going to start using this mess, you probably ought to just
pull from my git trees, which are linked below.

This is an extraordinary way to destroy everything.  Enjoy!
Comments and questions are, as always, welcome.

--D

kernel git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git/log/?h=deferred-inactivation
---
 fs/xfs/scrub/common.c     |    2 
 fs/xfs/scrub/quotacheck.c |    7 +
 fs/xfs/xfs_bmap_util.c    |   38 +++
 fs/xfs/xfs_fsops.c        |    9 +
 fs/xfs/xfs_icache.c       |  555 ++++++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_icache.h       |   13 +
 fs/xfs/xfs_inode.c        |  102 ++++++++
 fs/xfs/xfs_inode.h        |   15 +
 fs/xfs/xfs_iomap.c        |   14 +
 fs/xfs/xfs_log_recover.c  |    7 +
 fs/xfs/xfs_mount.c        |   23 ++
 fs/xfs/xfs_mount.h        |   12 +
 fs/xfs/xfs_qm.c           |   29 ++
 fs/xfs/xfs_qm.h           |   17 +
 fs/xfs/xfs_qm_syscalls.c  |   20 ++
 fs/xfs/xfs_super.c        |   63 ++++-
 fs/xfs/xfs_trace.h        |   15 +
 17 files changed, 909 insertions(+), 32 deletions(-)
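
P.S.  For anyone who hasn't poked around the inode cache before, here
is a minimal sketch of the tag-and-defer pattern described above.  It
is illustrative only, not code from this series: the sketch_* names and
the SKETCH_ICI_RECLAIM_TAG tag are made up, and only the stock radix
tree and workqueue APIs are real.

/* Illustrative sketch only -- none of this is from the patch series. */
#include <linux/kernel.h>
#include <linux/radix-tree.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Hypothetical stand-in for the per-filesystem mount structure. */
struct sketch_mount {
	struct radix_tree_root	ici_tree;	/* in-core inode cache */
	spinlock_t		ici_lock;	/* protects the tree tags */
	struct delayed_work	inactive_work;	/* background inactivation */
};

/* Hypothetical radix tree tag meaning "this inode needs inactivation". */
#define SKETCH_ICI_RECLAIM_TAG	0

/*
 * Mark an inode for deferred inactivation: tag its slot in the radix
 * tree so the worker can find it cheaply, then kick the worker.
 */
static void sketch_inode_mark_inactive(struct sketch_mount *mp,
				       unsigned long ino)
{
	spin_lock(&mp->ici_lock);
	radix_tree_tag_set(&mp->ici_tree, ino, SKETCH_ICI_RECLAIM_TAG);
	spin_unlock(&mp->ici_lock);

	/* No-op if the work item is already queued. */
	queue_delayed_work(system_unbound_wq, &mp->inactive_work,
			   msecs_to_jiffies(100));
}

/* Deferred work item: visit only the tagged slots. */
static void sketch_inactive_worker(struct work_struct *work)
{
	struct sketch_mount *mp = container_of(to_delayed_work(work),
					       struct sketch_mount,
					       inactive_work);
	struct radix_tree_iter iter;
	void __rcu **slot;

	/*
	 * Real code would batch up inodes and drop the lock before
	 * doing transactional on-disk updates; this placeholder just
	 * clears the tag.
	 */
	spin_lock(&mp->ici_lock);
	radix_tree_for_each_tagged(slot, &mp->ici_tree, &iter, 0,
				   SKETCH_ICI_RECLAIM_TAG) {
		radix_tree_iter_tag_clear(&mp->ici_tree, &iter,
					  SKETCH_ICI_RECLAIM_TAG);
		/* ...free extents, update inode btrees, etc... */
	}
	spin_unlock(&mp->ici_lock);
}

The point of the tag is that the worker walks straight to the inodes
that need work instead of scanning the whole inode cache; the series
also bumps per-fs counters at tagging time (patches 3-4) so that the fs
counters reflect the pending frees immediately.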