Re: [PATCH 02/11] xfs: don't stall cowblocks scan if we can't take locks

On Mon, Jan 25, 2021 at 01:14:06PM -0500, Brian Foster wrote:
> On Sat, Jan 23, 2021 at 10:52:10AM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > 
> > Don't stall the cowblocks scan on a locked inode if we possibly can.
> > We'd much rather the background scanner keep moving.
> > 
> > Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> > Reviewed-by: Christoph Hellwig <hch@xxxxxx>
> > ---
> >  fs/xfs/xfs_icache.c |   21 ++++++++++++++++++---
> >  1 file changed, 18 insertions(+), 3 deletions(-)
> > 
> > 
> > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > index c71eb15e3835..89f9e692fde7 100644
> > --- a/fs/xfs/xfs_icache.c
> > +++ b/fs/xfs/xfs_icache.c
> > @@ -1605,17 +1605,31 @@ xfs_inode_free_cowblocks(
> >  	void			*args)
> >  {
> >  	struct xfs_eofblocks	*eofb = args;
> > +	bool			wait;
> >  	int			ret = 0;
> >  
> > +	wait = eofb && (eofb->eof_flags & XFS_EOF_FLAGS_SYNC);
> > +
> >  	if (!xfs_prep_free_cowblocks(ip))
> >  		return 0;
> >  
> >  	if (!xfs_inode_matches_eofb(ip, eofb))
> >  		return 0;
> >  
> > -	/* Free the CoW blocks */
> > -	xfs_ilock(ip, XFS_IOLOCK_EXCL);
> > -	xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
> > +	/*
> > +	 * If the caller is waiting, return -EAGAIN to keep the background
> > +	 * scanner moving and revisit the inode in a subsequent pass.
> > +	 */
> > +	if (!xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL)) {
> > +		if (wait)
> > +			return -EAGAIN;
> > +		return 0;
> > +	}
> > +	if (!xfs_ilock_nowait(ip, XFS_MMAPLOCK_EXCL)) {
> > +		if (wait)
> > +			ret = -EAGAIN;
> > +		goto out_iolock;
> > +	}
> 
> Hmm.. I'd be a little concerned that this allows a scan to repeat
> indefinitely against a competing workload, since a restart doesn't
> carry over any state from the previous scan. I suppose the
> xfs_prep_free_cowblocks() checks make that slightly less likely for a
> given file, but I'm more worried about a scenario with a large set of
> inodes in a particular AG and a sufficient amount of concurrent
> activity. All it takes is one trylock failure per scan to have to
> start the whole thing over again... hm?

I'm not quite sure what to do here -- xfs_inode_free_eofblocks can
already return -EAGAIN, which (I think) means it's already possible for
the low-quota scan to stall indefinitely if the scan can't lock an
inode.
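
For reference, the eofblocks side already does roughly this (quoting
from memory rather than the tree, so treat the details as approximate):

	wait = eofb && (eofb->eof_flags & XFS_EOF_FLAGS_SYNC);

	/*
	 * If the caller is waiting, return -EAGAIN to keep the background
	 * scanner moving and revisit the inode in a subsequent pass.
	 */
	if (!xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL)) {
		if (wait)
			return -EAGAIN;
		return 0;
	}

	ret = xfs_free_eofblocks(ip);
	xfs_iunlock(ip, XFS_IOLOCK_EXCL);
	return ret;

so this patch only brings the cowblocks scan in line with that
behavior.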

I think we already have a stall-limiting factor here: every other
thread in the system that hits EDQUOT drops its IOLOCK to scan the fs,
which means that while those threads loop around the scanner they can
only be releasing quota, driving us towards fewer and fewer inodes that
have the same dquots and either blockgc tag set.
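
To spell out what I mean, the buffered write path is shaped roughly
like the sketch below (not a verbatim copy of fs/xfs/xfs_file.c;
free_quota_blockgc() is a stand-in for whichever eofblocks/cowblocks
quota scan helper the tree has at this point in the series):

	ssize_t		ret;
	bool		cleared_space = false;

write_retry:
	xfs_ilock(ip, iolock);
	ret = iomap_file_buffered_write(iocb, from,
			&xfs_buffered_write_iomap_ops);
	if (ret == -EDQUOT && !cleared_space) {
		/*
		 * Drop the IOLOCK so a synchronous blockgc scan can take it,
		 * free speculative preallocations charged to this inode's
		 * dquots, and retry the write if anything was freed.
		 */
		xfs_iunlock(ip, iolock);
		cleared_space = free_quota_blockgc(ip);	/* stand-in name */
		if (cleared_space)
			goto write_retry;
	}
	xfs_iunlock(ip, iolock);
	return ret;

Every thread spinning in that loop has dropped its IOLOCK before
kicking the scan, and each pass can only release quota, so the set of
inodes with the same dquots and a blockgc tag shrinks over time.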

--D

> Brian
> 
> >  
> >  	/*
> >  	 * Check again, nobody else should be able to dirty blocks or change
> > @@ -1625,6 +1639,7 @@ xfs_inode_free_cowblocks(
> >  		ret = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, false);
> >  
> >  	xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
> > +out_iolock:
> >  	xfs_iunlock(ip, XFS_IOLOCK_EXCL);
> >  
> >  	return ret;
> > 
> 


