Re: [PATCH 06/11] xfs: flush eof/cowblocks if we can't reserve quota for file blocks

On Mon, Jan 25, 2021 at 01:16:23PM -0500, Brian Foster wrote:
> On Sun, Jan 24, 2021 at 09:39:53AM +0000, Christoph Hellwig wrote:
> > > +	/* We only allow one retry for EDQUOT/ENOSPC. */
> > > +	if (*retry || (error != -EDQUOT && error != -ENOSPC)) {
> > > +		*retry = false;
> > > +		return error;
> > > +	}
> > 
> > > +	/* Release resources, prepare for scan. */
> > > +	xfs_trans_cancel(*tpp);
> > > +	*tpp = NULL;
> > > +	xfs_iunlock(ip, XFS_ILOCK_EXCL);
> > > +
> > > +	/* Try to free some quota for this file's dquots. */
> > > +	*retry = true;
> > > +	xfs_blockgc_free_quota(ip, 0);
> > > +	return 0;
> > 
> > I still have grave reservations about this calling convention.  And if
> > you just remove the unlock and the call to xfs_blockgc_free_quota here
> > we don't require a whole lot of boilerplate code in the callers while
> > making the code possible to reason about for a mere human.
> > 
> 
> I agree that the retry pattern is rather odd. I'm curious, is there a
> specific reason this scanning task has to execute outside of transaction
> context in the first place?

Dave didn't like the open-coded retry and told me to shrink the call
sites to:

	error = xfs_trans_reserve_quota(...);
	if (error)
		goto out_trans_cancel;
	if (quota_retry)
		goto retry;
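
Spelled out with its surroundings, that shape sits inside a retry loop,
roughly like this (a sketch; the transaction setup around the two quoted
checks is illustrative, not lifted from the series):

retry:
	error = xfs_trans_alloc(mp, &tres, 0, 0, 0, &tp);
	if (error)
		return error;
	xfs_ilock(ip, XFS_ILOCK_EXCL);
	xfs_trans_ijoin(tp, ip, 0);

	error = xfs_trans_reserve_quota(...);
	if (error)
		goto out_trans_cancel;
	/*
	 * When quota_retry is set, the reservation helper has already
	 * cancelled the transaction, dropped the ILOCK, and run the
	 * blockgc scan, so looping back is safe.
	 */
	if (quota_retry)
		goto retry;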

So here we are, slowly putting things almost all the way back to where
they were originally.  Now I have a little utility function:

/*
 * Cancel a transaction and try to clear some space so that we can
 * reserve some quota.  The caller must hold the ILOCK; when this
 * function returns, the transaction will be cancelled and the ILOCK
 * will have been released.
 */
int
xfs_trans_cancel_qretry(
	struct xfs_trans	*tp,
	struct xfs_inode	*ip)
{
	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));

	xfs_trans_cancel(tp);
	xfs_iunlock(ip, XFS_ILOCK_EXCL);

	return xfs_blockgc_free_quota(ip, 0);
}

Which I guess reduces the amount of call site boilerplate from four
lines to two, only now I've spent half of last week on this.
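
So a call site would end up looking something like this (again a
sketch; exactly how the retry flag is threaded through is my guess,
not code from the series):

	error = xfs_trans_reserve_quota(...);
	if (error)
		goto out_trans_cancel;
	if (quota_retry) {
		/*
		 * Here the reservation left the transaction and ILOCK
		 * intact; the helper cancels, unlocks, and runs the
		 * blockgc scan before we go around again.
		 */
		error = xfs_trans_cancel_qretry(tp, ip);
		if (error)
			return error;
		goto retry;
	}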

> Assuming it does because the underlying work
> may involve more transactions or whatnot, I'm wondering if this logic
> could be buried further down in the transaction allocation path.
> 
> For example, if we passed the quota reservation and inode down into a
> new variant of xfs_trans_alloc(), it could acquire the ilock and attempt
> the quota reservation as a final step (to avoid adding an extra
> unconditional ilock cycle). If quota res fails, iunlock and release the
> log res internally and perform the scan. From there, perhaps we could
> retry the quota reservation immediately without logres or the ilock by
> saving references to the dquots, and then only reacquire logres/ilock on
> success...?  Just thinking out loud, so that might require further
> thought...

Yes, that's certainly possible, and probably a good design goal to have
an xfs_trans_alloc_quota(tres, ip, whichfork, nblks, &tp) helper that one
could call to reserve a transaction, lock the inode, and reserve the
appropriate amount of quota to handle mapping nblks into an inode fork.
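
As a very rough sketch (the argument handling and error paths here are
my guesses at a shape, not code from the series), that wrapper might
look like:

int
xfs_trans_alloc_quota(
	struct xfs_trans_res	*tres,
	struct xfs_inode	*ip,
	int			whichfork,
	uint			nblks,
	struct xfs_trans	**tpp)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct xfs_trans	*tp;
	bool			retried = false;
	int			error;

retry:
	error = xfs_trans_alloc(mp, tres, 0, 0, 0, &tp);
	if (error)
		return error;

	xfs_ilock(ip, XFS_ILOCK_EXCL);
	xfs_trans_ijoin(tp, ip, 0);

	/*
	 * Reservation elided; whichfork and nblks would feed it, and a
	 * real version has to know whether the fork maps to the data
	 * or realtime device (see complication 2 below).
	 */
	error = xfs_trans_reserve_quota(...);
	if ((error == -EDQUOT || error == -ENOSPC) && !retried) {
		/* Drop everything, scan for reclaimable space, retry once. */
		xfs_trans_cancel(tp);
		xfs_iunlock(ip, XFS_ILOCK_EXCL);
		xfs_blockgc_free_quota(ip, 0);
		retried = true;
		goto retry;
	}
	if (error) {
		xfs_trans_cancel(tp);
		xfs_iunlock(ip, XFS_ILOCK_EXCL);
		return error;
	}

	*tpp = tp;
	return 0;
}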

However, there are complications that keep this from being a trivial switch:

1. Reflink and (new) swapext don't actually know how many blocks they
need to reserve until after they've grabbed the two ILOCKs, which means
that the wrapper is of no use here.

2. For the remaining quota reservation callsites, you have to deal with
the bmap code that computes qblocks for reservation against the realtime
device.  This is opening a huge can of worms because:

3. Realtime and quota are not supported in combination, which means
that none of that code ever gets properly QA'd.  It would be totally
stupid to rework most
of the quota reservation callsites and still leave that logic bomb.
This gigantic piece of technical debt needs to be paid off, either by
fixing the functionality and getting it under test, or by dropping rt
quota support completely and officially.

My guess is that fixing rt quota is probably going to take 10-15
patches, and doing more small cleanups to convert the callsites will be
another 10 or so.

4. We're already past -rc5, and what started as two cleanup patchsets
totalling 13 patches has grown into four patchsets of 27, and I /really/
would just like
to get these patches merged without expanding the scope of work even
further.

--D

> Brian
> 


