On Tue, May 24, 2022 at 01:47:36PM +0300, Amir Goldstein wrote:
> On Tue, May 24, 2022 at 1:37 PM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> >
> > The current blocking mechanism for pushing the inodegc queue out to
> > disk can result in systems becoming unusable when there is a long
> > running inodegc operation. This is because the statfs()
> > implementation currently issues a blocking flush of the inodegc
> > queue and a significant number of common system utilities will call
> > statfs() to discover something about the underlying filesystem.
> >
> > This can result in userspace operations getting stuck on inodegc
> > progress, and when trying to remove a heavily reflinked file on slow
> > storage with a full journal, this can result in delays measured in
> > hours.
> >
> > Avoid this problem by adding a "push" function that expedites the
> > flushing of the inodegc queue, but doesn't wait for it to complete.
> >
> > Convert xfs_fs_statfs() to use this mechanism so it doesn't block,
> > but still ensures that queued operations are expedited.
> >
> > Fixes: ab23a7768739 ("xfs: per-cpu deferred inode inactivation queues")
> > Reported-by: Chris Dunlop <chris@xxxxxxxxxxxx>
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_icache.c | 20 +++++++++++++++-----
> >  fs/xfs/xfs_icache.h |  1 +
> >  fs/xfs/xfs_super.c  |  7 +++++--
> >  fs/xfs/xfs_trace.h  |  1 +
> >  4 files changed, 22 insertions(+), 7 deletions(-)
> >
> > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > index 786702273621..2609825d53ee 100644
> > --- a/fs/xfs/xfs_icache.c
> > +++ b/fs/xfs/xfs_icache.c
> > @@ -1862,19 +1862,29 @@ xfs_inodegc_worker(
> >  }
> >
> >  /*
> > - * Force all currently queued inode inactivation work to run immediately and
> > - * wait for the work to finish.
> > + * Expedite all pending inodegc work to run immediately. This does not wait
> > + * for completion of the work.
> >   */
> >  void
> > -xfs_inodegc_flush(
> > +xfs_inodegc_push(
> >  	struct xfs_mount	*mp)
> >  {
> >  	if (!xfs_is_inodegc_enabled(mp))
> >  		return;
> > +	trace_xfs_inodegc_push(mp, __return_address);
> > +	xfs_inodegc_queue_all(mp);
> > +}
> >
> > +/*
> > + * Force all currently queued inode inactivation work to run immediately and
> > + * wait for the work to finish.
> > + */
> > +void
> > +xfs_inodegc_flush(
> > +	struct xfs_mount	*mp)
> > +{
> > +	xfs_inodegc_push(mp);
> >  	trace_xfs_inodegc_flush(mp, __return_address);
>
> Unintentional(?) change of behavior:
> trace_xfs_inodegc_flush() will be called in
> the (!xfs_is_inodegc_enabled(mp)) case.

At worst we end up waiting for any inodegc workers that are still
running, right?  I think that's reasonable behavior for a flush
function, and shouldn't cause any weird interactions.

> I also wonder whether trace_xfs_inodegc_flush() shouldn't come
> before trace_xfs_inodegc_push() in this flow, but this is just a
> matter of tracing conventions, and you know best what will be
> convenient for xfs developers reading the trace event stream.

Why?  _push has its own tracepoint, which we can use to tell whether
inodegc was enabled at _flush time.

--D

> Thanks,
> Amir.
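
The xfs_super.c hunk is not quoted in the reply above. Per the commit
message, the statfs() side of the change amounts to swapping the
blocking flush for the new non-blocking push. A rough sketch of what
that hunk would look like (reconstructed from the description, not the
verbatim patch):

--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ xfs_fs_statfs(
-	/* Wait for whatever inactivations are in progress. */
-	xfs_inodegc_flush(mp);
+	/*
+	 * Expedite background inodegc but don't wait for it; statfs()
+	 * must not block for hours behind a long-running inactivation.
+	 */
+	xfs_inodegc_push(mp);

The remaining one-line changes in the diffstat would presumably be the
new declaration in xfs_icache.h (void xfs_inodegc_push(struct xfs_mount
*mp);) and a matching xfs_inodegc_push trace event definition in
xfs_trace.h.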