Re: [PATCH 03/20] xfs: defer inode inactivation to a workqueue

On Fri, Jul 30, 2021 at 02:24:00PM +1000, Dave Chinner wrote:
> On Thu, Jul 29, 2021 at 11:44:10AM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > 
> > Instead of calling xfs_inactive directly from xfs_fs_destroy_inode,
> > defer the inactivation phase to a separate workqueue.  With this change,
> > we can speed up directory tree deletions by reducing the duration of
> > unlink() calls to the directory and unlinked list updates.
> > 
> > By moving the inactivation work to the background, we can reduce the
> > total cost of deleting a lot of files by performing the file deletions
> > in disk order instead of directory entry order, which can be arbitrary.
> > 
> > We introduce two new inode flags -- NEEDS_INACTIVE and INACTIVATING.
> > The first flag helps our worker find inodes needing inactivation, and
> > the second flag marks inodes that are in the process of being
> > inactivated.  A concurrent xfs_iget on the inode can still resurrect the
> > inode by clearing NEEDS_INACTIVE (or bailing if INACTIVATING is set).
> > 
> > Unfortunately, deferring the inactivation has one huge downside --
> > eventual consistency.  Since all the freeing is deferred to a worker
> > thread, one can rm a file but the space doesn't come back immediately.
> > This can cause some odd side effects with quota accounting and statfs,
> > so we flush inactivation work during syncfs in order to maintain the
> > existing behaviors, at least for callers that unlink() and sync().
> > 
> > For this patch we'll set the delay to zero to mimic the old timing as
> > much as possible; in the next patch we'll play with different delay
> > settings.
> > 
> > Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> .....
> > +
> > +/* Disable the inode inactivation background worker and wait for it to stop. */
> > +void
> > +xfs_inodegc_stop(
> > +	struct xfs_mount	*mp)
> > +{
> > +	if (!test_and_clear_bit(XFS_OPFLAG_INODEGC_RUNNING_BIT, &mp->m_opflags))
> > +		return;
> > +
> > +	cancel_delayed_work_sync(&mp->m_inodegc_work);
> > +	trace_xfs_inodegc_stop(mp, __return_address);
> > +}
> 
> FWIW, this introduces a new mount field that does the same thing as the
> m_opstate field I added in my feature flag cleanup series (i.e.
> atomic operational state changes).  Personally I much prefer my
> opstate stuff because this is state, not flags, and the namespace is
> much less verbose...

Yes, well, is that ready to go?  Like, right /now/?  I already bolted
the quotaoff scrapping patchset on the front, after reworking the ENOSPC
retry loops and reworking quota apis before that...

> There are also conflicts all over the place because of that. All the
> RO checks are busted,

Can we focus on /this/ patchset, then?  What specifically is broken
about the ro checking in it?

And since the shrinkers are always a source of amusement, what /is/ up
with this one?  I don't really like having to feed it magic numbers just
to get it to do what I want, which is: let it free some memory in the
first round, then kick the background workers when the priority bumps
(er, decreases), and hope that's enough not to OOM the box.

--D

> lots of the quota mods in your tree conflict
> with the sb_version_hasfeat -> has_feat conversion, etc.
> 
> We're going to have to reconcile this at some point soon...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx


