Re: [PATCH 03/20] xfs: defer inode inactivation to a workqueue

On Fri, Jul 30, 2021 at 09:21:12PM -0700, Darrick J. Wong wrote:
> On Fri, Jul 30, 2021 at 02:24:00PM +1000, Dave Chinner wrote:
> > On Thu, Jul 29, 2021 at 11:44:10AM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > > 
> > > Instead of calling xfs_inactive directly from xfs_fs_destroy_inode,
> > > defer the inactivation phase to a separate workqueue.  With this change,
> > > we can speed up directory tree deletions by reducing the duration of
> > > unlink() calls to the directory and unlinked list updates.
> > > 
> > > By moving the inactivation work to the background, we can reduce the
> > > total cost of deleting a lot of files by performing the file deletions
> > > in disk order instead of directory entry order, which can be arbitrary.
> > > 
> > > We introduce two new inode flags -- NEEDS_INACTIVE and INACTIVATING.
> > > The first flag helps our worker find inodes needing inactivation, and
> > > the second flag marks inodes that are in the process of being
> > > inactivated.  A concurrent xfs_iget on the inode can still resurrect the
> > > inode by clearing NEEDS_INACTIVE (or bailing if INACTIVATING is set).
> > > 
> > > Unfortunately, deferring the inactivation has one huge downside --
> > > eventual consistency.  Since all the freeing is deferred to a worker
> > > thread, one can rm a file but the space doesn't come back immediately.
> > > This can cause some odd side effects with quota accounting and statfs,
> > > so we flush inactivation work during syncfs in order to maintain the
> > > existing behaviors, at least for callers that unlink() and sync().
> > > 
> > > For this patch we'll set the delay to zero to mimic the old timing as
> > > much as possible; in the next patch we'll play with different delay
> > > settings.
> > > 
> > > Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> > .....
> > > +
> > > +/* Disable the inode inactivation background worker and wait for it to stop. */
> > > +void
> > > +xfs_inodegc_stop(
> > > +	struct xfs_mount	*mp)
> > > +{
> > > +	if (!test_and_clear_bit(XFS_OPFLAG_INODEGC_RUNNING_BIT, &mp->m_opflags))
> > > +		return;
> > > +
> > > +	cancel_delayed_work_sync(&mp->m_inodegc_work);
> > > +	trace_xfs_inodegc_stop(mp, __return_address);
> > > +}
> > 
> > FWIW, this introduces a new mount field that does the same thing as the
> > m_opstate field I added in my feature flag cleanup series (i.e.
> > atomic operational state changes).  Personally I much prefer my
> > opstate stuff because this is state, not flags, and the namespace is
> > much less verbose...
> 
> Yes, well, is that ready to go?  Like, right /now/?  I already bolted
> the quotaoff scrapping patchset on the front, after reworking the ENOSPC
> retry loops and reworking quota apis before that...

Should be - that's why it's in my patch stack getting tested. But I
wasn't suggesting that you need to put it in first, just trying to
give you the heads up that there's a substantial conflict between
that and this patchset.
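
For anyone following along, here is a minimal, hypothetical sketch of
the start/queue counterparts to the _stop() routine quoted above. The
names (example_mount, EXAMPLE_INODEGC_RUNNING_BIT, the opflags and
inodegc fields) are stand-ins, not the actual patch code, and the real
series may differ:

#include <linux/workqueue.h>
#include <linux/bitops.h>

/* Stand-in names only; not the actual patch code. */
#define EXAMPLE_INODEGC_RUNNING_BIT	0

struct example_mount {
	unsigned long		opflags;	/* stand-in for m_opflags */
	struct delayed_work	inodegc_work;	/* stand-in for m_inodegc_work */
	struct workqueue_struct	*inodegc_wq;	/* stand-in for the gc workqueue */
};

/* Enable background inactivation; mirror image of the _stop() quoted above. */
static void example_inodegc_start(struct example_mount *mp)
{
	if (test_and_set_bit(EXAMPLE_INODEGC_RUNNING_BIT, &mp->opflags))
		return;		/* already running */

	/* kick the worker so anything already queued gets processed */
	queue_delayed_work(mp->inodegc_wq, &mp->inodegc_work, 0);
}

/* Only queue new work while the RUNNING bit is set, so _stop() can drain. */
static void example_inodegc_queue(struct example_mount *mp)
{
	if (!test_bit(EXAMPLE_INODEGC_RUNNING_BIT, &mp->opflags))
		return;

	queue_delayed_work(mp->inodegc_wq, &mp->inodegc_work, 0);
}

The question being raised is only where those bits live - a dedicated
atomic operational state word (m_opstate) versus a general-purpose
flags field (m_opflags); the gating logic is the same either way.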

> > There are also conflicts all over the place because of that. All
> > the RO checks are busted,
> 
> Can we focus on /this/ patchset, then?  What specifically is broken
> about the ro checking in it?

Sorry, I wasn't particularly clear about that. What I meant was that
stuff like the new RO and shutdown checks in this patchset don't just
produce merge conflicts, they cause compilation failures. So the merge
isn't simply a case of resolving conflicts; the code doesn't compile
(i.e. it is busted) even after all the reported conflicts are fixed.

> And since the shrinkers are always a source of amusement, what /is/ up
> with it?  I don't really like having to feed it magic numbers just to
> get it to do what I want, which is ... let it free some memory in the
> first round, then we'll kick the background workers when the priority
> bumps (er, decreases), and hope that's enough not to OOM the box.

Well, the shrinkers aren't intended to be the kind of one-shot memory
pressure notification you are trying to use them as. They are intended
to be told the amount of work that needs to be done to free memory,
and then they calculate how much of that work should be done based on
the VM's idea of the current level of memory pressure.
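
As a generic illustration of that count/scan model (made-up names and
a made-up backlog counter, not the XFS shrinker from this series): the
shrinker reports how many objects it could free, and the VM hands back
a scan target scaled to its view of memory pressure.

#include <linux/kernel.h>
#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic_long_t example_nr_deferred;	/* made-up backlog counter */

/* Report how much work there is to do; 0 means "nothing to free". */
static unsigned long example_count_objects(struct shrinker *shrink,
					   struct shrink_control *sc)
{
	return atomic_long_read(&example_nr_deferred);
}

/*
 * The VM calls back with sc->nr_to_scan scaled to its idea of the
 * current memory pressure; do that much work and report what was freed.
 */
static unsigned long example_scan_objects(struct shrinker *shrink,
					  struct shrink_control *sc)
{
	unsigned long freed = min_t(unsigned long, sc->nr_to_scan,
				    atomic_long_read(&example_nr_deferred));

	/* ... push 'freed' objects through reclaim/inactivation here ... */
	atomic_long_sub(freed, &example_nr_deferred);
	return freed;
}

static struct shrinker example_shrinker = {
	.count_objects	= example_count_objects,
	.scan_objects	= example_scan_objects,
	.seeks		= DEFAULT_SEEKS,
};

/* registered once at mount/init time via register_shrinker() */

That is, the scaling from "light pressure" to "near OOM" is supposed
to come from the nr_to_scan the VM computes, not from magic numbers
inside the filesystem.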

One-shot shrinker triggers never tend to work well because they treat
all memory pressure the same - very light memory pressure is dealt
with by the same big hammer that deals with OOM levels of memory
pressure.

As it is, I'm more concerned right now with finding out why there
are such large performance regressions in highly concurrent recursive
chmod/unlink workloads. I spent most of Friday looking at it, trying
to work out what behaviour was causing the regression, but I haven't
isolated it yet. I suspect that it is lockstepping between user
processes and background inactivation for the unlink - I'm seeing
the backlink rhashtable show up in the profiles which indicates the
unlinked list lengths are an issue and we're lockstepping the AGI.
It also may simply be that there is too much parallelism hammering
the transaction subsystem now....

IOWs, I'm basically going to have to pull this apart patch by patch
to tease out where the behaviours go wrong and see if there are ways
to avoid and mitigate those behaviours.  Hence I haven't even got to
the shrinker/oom considerations yet; there's a bigger performance
issue that needs to be understood first. It may be that they are
related, but right now we need to know why recursive chmod is
saw-toothing (it's not a lack of log space!) and concurrent unlinks
throughput has dropped by half...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


