On Wed, Jul 01, 2020 at 10:30:57AM -0400, Brian Foster wrote:
> On Tue, Jun 23, 2020 at 07:50:13PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > For inodes that are dirty, we have an attached cluster buffer that
> > we want to use to track the dirty inode through the AIL.
> > Unfortunately, locking the cluster buffer and adding it to the
> > transaction when the inode is first logged in a transaction leads to
> > buffer lock ordering inversions.
> > 
> > The specific problem is ordering against the AGI buffer. When
> > modifying unlinked lists, the buffer lock order is AGI -> inode
> > cluster buffer as the AGI buffer lock serialises all access to the
> > unlinked lists. Unfortunately, functionality like xfs_droplink()
> > logs the inode before calling xfs_iunlink(), as do various directory
> > manipulation functions. The inode can be logged way down in the
> > stack as far as the bmapi routines and hence, without a major
> > rewrite of lots of APIs, there's no way we can avoid the inode being
> > logged by something until after the AGI has been logged.
> > 
> > As we are going to be using ordered buffers for inode AIL tracking,
> > there isn't a need to actually lock that buffer against modification
> > as all the modifications are captured by logging the inode item
> > itself. Hence we don't actually need to join the cluster buffer into
> > the transaction until just before it is committed. This means we do
> > not perturb any of the existing buffer lock orders in transactions,
> > and the inode cluster buffer is always locked last in a transaction
> > that doesn't otherwise touch inode cluster buffers.
> > 
> > We do this by introducing a precommit log item method. A log item
> > method is used because it is likely dquots will be moved to this
> > same ordered buffer tracking scheme and hence will need a similar
> > callout. This commit just introduces the mechanism; the inode item
> > implementation is in followup commits.
> > 
> > The precommit items need to be sorted into consistent order as we
> > may be locking multiple items here. Hence if we have two dirty
> > inodes in cluster buffers A and B, and some other transaction has
> > two separate dirty inodes in the same cluster buffers, locking them
> > in different orders opens us up to ABBA deadlocks. Hence we sort the
> > items on the transaction based on the presence of a sort log item
> > method.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> 
> Seems like a nice abstraction, particularly when you consider the other
> use cases you described that should fall into place over time. A couple
> minor comments..
> 
> >  fs/xfs/xfs_icache.c |  1 +
> >  fs/xfs/xfs_trans.c  | 90 +++++++++++++++++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_trans.h  |  6 ++-
> >  3 files changed, 95 insertions(+), 2 deletions(-)
> > 
> ...
> > diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
> > index 3c94e5ff4316..6f350490f84b 100644
> > --- a/fs/xfs/xfs_trans.c
> > +++ b/fs/xfs/xfs_trans.c
> > @@ -799,6 +799,89 @@ xfs_trans_committed_bulk(
> >  	spin_unlock(&ailp->ail_lock);
> >  }
> >  
> > +/*
> > + * Sort transaction items prior to running precommit operations. This will
> > + * attempt to order the items such that they will always be locked in the same
> > + * order. Items that have no sort function are moved to the end of the list
> > + * and so are locked last (XXX: need to check the logic matches the comment).
> > + *
> 
> Heh, I was going to ask what the expected behavior was with the various
> !iop_sort() cases and whether we can really expect those items to be
> isolated at the end of the list.
> 
> > + * This may need refinement as different types of objects add sort functions.
> > + *
> > + * Function is more complex than it needs to be because we are comparing 64 bit
> > + * values and the function only returns 32 bit values.
> > + */
> > +static int
> > +xfs_trans_precommit_sort(
> > +	void			*unused_arg,
> > +	struct list_head	*a,
> > +	struct list_head	*b)
> > +{
> > +	struct xfs_log_item	*lia = container_of(a,
> > +					struct xfs_log_item, li_trans);
> > +	struct xfs_log_item	*lib = container_of(b,
> > +					struct xfs_log_item, li_trans);
> > +	int64_t			diff;
> > +
> > +	if (!lia->li_ops->iop_sort && !lib->li_ops->iop_sort)
> > +		return 0;
> > +	if (!lia->li_ops->iop_sort)
> > +		return 1;
> > +	if (!lib->li_ops->iop_sort)
> > +		return -1;
> 
> I'm a little confused on what these values are supposed to mean if one
> of the two items is non-sortable. Is the purpose of this simply to move
> sortable items to the head and non-sortable toward the tail, as noted
> above?

If the log item doesn't have a sort function, it implies the object is
already locked and modified and there's no pre-commit operation going to
be performed on it. In that case, I decided to move them to the tail of
the list so that it would be easier to verify that the items that need
sorting were, indeed, sorted into the correct order.

The choice was arbitrary - they could be moved to the head of the list,
or they could be left where they are and everything else is ordered
around them, but I went for the behaviour that is easy to verify
visually with debug output or via a list walk in a debugger...
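To illustrate the resulting order, here is a minimal userspace model of
the comparator above - plain qsort() over an array standing in for
list_sort() over the transaction's item list. The demo struct, its
fields and the sample items are hypothetical stand-ins, not the real
xfs_log_item:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a log item; not the real xfs_log_item. */
struct demo_item {
	const char	*name;
	int		sortable;	/* models li_ops->iop_sort != NULL */
	uint64_t	key;		/* models the 64 bit key ->iop_sort
					 * would return, e.g. the inode
					 * cluster buffer address */
};

/* Same decision structure as xfs_trans_precommit_sort() above. */
static int
demo_precommit_cmp(const void *pa, const void *pb)
{
	const struct demo_item	*a = pa;
	const struct demo_item	*b = pb;

	/* Items without a sort method sink towards the tail. */
	if (!a->sortable && !b->sortable)
		return 0;
	if (!a->sortable)
		return 1;
	if (!b->sortable)
		return -1;

	/*
	 * Three-way compare of the 64 bit keys; a bare (int)(a - b)
	 * could truncate, which is the extra complexity the patch
	 * comment refers to.
	 */
	if (a->key < b->key)
		return -1;
	if (a->key > b->key)
		return 1;
	return 0;
}

int
main(void)
{
	struct demo_item items[] = {
		{ "inode in cluster buffer B",	1, 200 },
		{ "item without iop_sort",	0,   0 },
		{ "inode in cluster buffer A",	1, 100 },
	};
	size_t	i;

	/*
	 * qsort() is not stable while the kernel's list_sort() is, but
	 * that only affects items that compare equal, so it doesn't
	 * change this illustration.
	 */
	qsort(items, 3, sizeof(items[0]), demo_precommit_cmp);
	for (i = 0; i < 3; i++)
		printf("%s\n", items[i].name);
	/* -> cluster A, then cluster B, then the unsortable item last */
	return 0;
}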
> > +static int
> > +xfs_trans_run_precommits(
> > +	struct xfs_trans	*tp)
> > +{
> > +	struct xfs_mount	*mp = tp->t_mountp;
> > +	struct xfs_log_item	*lip, *n;
> > +	int			error = 0;
> > +
> > +	if (XFS_FORCED_SHUTDOWN(mp))
> > +		return -EIO;
> > +
> 
> I'd rather not change behavior here. This effectively overrides the
> shutdown check in the caller because we get here regardless of whether
> the transaction has any pre-commit callouts or not. It seems like this
> is unnecessary, at least for the time being, if the precommit is
> primarily focused on sorting.

I put that there because if we are already shut down then there's no
point in even sorting or running pre-commits - they are going to error
out trying to access the objects they need to modify anyway. It really
isn't critical, just seemed superfluous to run code that we already
know will be cancelled and/or error out...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
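The quote above cuts off after the shutdown check, so for context here
is a sketch of how the body of xfs_trans_run_precommits() plausibly
continues, pieced together from the commit message: sort the
transaction's items for consistent lock ordering, then run the new
precommit method on each dirty item. The iop_precommit name, the
XFS_LI_DIRTY test and the shutdown-on-error handling are assumptions of
this sketch, not quoted from the patch:

	/*
	 * Sort once so every transaction locks its precommit items in
	 * the same order, avoiding ABBA deadlocks between transactions
	 * that dirty inodes in the same pair of cluster buffers.
	 */
	list_sort(NULL, &tp->t_items, xfs_trans_precommit_sort);

	/* Run the precommit method, if any, on each dirty log item. */
	list_for_each_entry_safe(lip, n, &tp->t_items, li_trans) {
		if (!test_bit(XFS_LI_DIRTY, &lip->li_flags))
			continue;
		if (lip->li_ops->iop_precommit) {
			error = lip->li_ops->iop_precommit(tp, lip);
			if (error)
				break;
		}
	}
	if (error)
		xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
	return error;
}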