Re: [PATCH 7/7] xfs: push the grant head when the log head moves forward

On Thu, Sep 05, 2019 at 12:25:33PM -0400, Brian Foster wrote:
> On Thu, Sep 05, 2019 at 08:50:56AM +1000, Dave Chinner wrote:
> > On Wed, Sep 04, 2019 at 03:34:42PM -0400, Brian Foster wrote:
> > > On Wed, Sep 04, 2019 at 02:24:51PM +1000, Dave Chinner wrote:
> > > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > > > +/*
> > > > + * Completion of an iclog IO does not imply that a transaction has completed, as
> > > > + * transactions can be large enough to span many iclogs. We cannot change the
> > > > + * tail of the log half way through a transaction as this may be the only
> > > > + * transaction in the log and moving the tail to point to the middle of it
> > > > + * will prevent recovery from finding the start of the transaction. Hence we
> > > > + * should only update the last_sync_lsn if this iclog contains transaction
> > > > + * completion callbacks on it.
> > > > + *
> > > > + * We have to do this before we drop the icloglock to ensure we are the only one
> > > > + * that can update it.
> > > > + *
> > > > + * If we are moving the last_sync_lsn forwards, we also need to ensure we kick
> > > > + * the reservation grant head pushing. This is due to the fact that the push
> > > > + * target is bound by the current last_sync_lsn value. Hence if we have a large
> > > > + * amount of log space bound up in this committing transaction then the
> > > > + * last_sync_lsn value may be the limiting factor preventing tail pushing from
> > > > + * freeing space in the log. Hence once we've updated the last_sync_lsn we
> > > > + * should push the AIL to ensure the push target (and hence the grant head) is
> > > > + * no longer bound by the old log head location and can move forwards and make
> > > > + * progress again.
> > > > + */
> > > > +static void
> > > > +xlog_state_set_callback(
> > > > +	struct xlog		*log,
> > > > +	struct xlog_in_core	*iclog,
> > > > +	xfs_lsn_t		header_lsn)
> > > > +{
> > > > +	iclog->ic_state = XLOG_STATE_CALLBACK;
> > > > +
> > > > +	ASSERT(XFS_LSN_CMP(atomic64_read(&log->l_last_sync_lsn), header_lsn) <= 0);
> > > > +
> > > > +	if (list_empty_careful(&iclog->ic_callbacks))
> > > > +		return;
> > > > +
> > > > +	atomic64_set(&log->l_last_sync_lsn, header_lsn);
> > > > +	xlog_grant_push_ail(log, 0);
> > > > +
> > > 
> > > Nit: extra whitespace line above.
> > 
> > Fixed.
> > 
> > > This still seems racy to me, FWIW. What if the AIL is empty (i.e. the
> > > push is skipped)?
> > 
> > If the AIL is empty, then it's a no-op because pushing on the AIL is
> > not going to make more log space become free. Besides, that's not
> > the problem being solved here - reservation wakeups on first insert
> > into the AIL are already handled by xfs_trans_ail_update_bulk() and
> > hence the first patch in the series. This patch is addressing the
> 
> Nothing currently wakes up reservation waiters on first AIL insertion.

Nor should it be necessary - it's the removal from the AIL that
frees up log space, not insertion. The update operation is a
remove followed by an insert - the remove part of that operation is
what may free up log space, not the insert.

Hence if we need to wake the log reservation waiters on first AIL
insert to fix a bug, we haven't found the underlying problem that is
preventing log space from being freed...
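
To make that concrete, here is a minimal userspace sketch - not
xfs_trans_ail_update_bulk() itself; the names and the flat-array
representation are invented for brevity - showing that the tail is
pinned by the smallest LSN still in the AIL, and hence that only the
remove half of an update can ever move it:

#include <stdint.h>
#include <stdio.h>

#define NITEMS	4

static uint64_t ail_lsn[NITEMS];	/* 0 means "not in the AIL" */

/* the log tail is pinned by the smallest LSN still in the AIL */
static uint64_t
ail_tail(void)
{
	uint64_t	min = 0;
	int		i;

	for (i = 0; i < NITEMS; i++) {
		if (ail_lsn[i] && (!min || ail_lsn[i] < min))
			min = ail_lsn[i];
	}
	return min;
}

int
main(void)
{
	ail_lsn[0] = 100;	/* oldest item pins the tail at LSN 100 */
	ail_lsn[1] = 250;
	printf("tail = %llu\n", (unsigned long long)ail_tail());

	/* relog item 0: an AIL update is a remove followed by an insert */
	ail_lsn[0] = 0;		/* remove - this is what can free tail space */
	printf("tail after remove = %llu\n", (unsigned long long)ail_tail());

	ail_lsn[0] = 500;	/* reinsert at the head moves nothing behind it */
	printf("tail after reinsert = %llu\n", (unsigned long long)ail_tail());
	return 0;
}
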
>
> > i.e. the AIL is for moving the tail of the log - this code moves the
> > head of the log. But both impact the AIL push target (it is based on
> > the distance between the head and tail), so we need
> > to update the push target just in case this commit does not move
> > the tail...
> > 
> > > What if xfsaild completes this push before the
> > > associated log items land in the AIL or we race with xfsaild emptying
> > > the AIL? Why not just reuse/update the existing grant head wake up logic
> > > in the iclog callback itself? E.g., something like the following
> > > (untested):
> > > 
> 
> And the raciness concerns..? AFAICT this still opens a race window where
> the AIL can idle on the push target before AIL insertion.

I don't know what race you see - if the AIL completes a push before
we insert new objects at the head from the current commit, then it
does not matter one bit because the items are being inserted at the
log head, not the log tail where the pushing occurs. If we are
inserting objects into the AIL within the push target window, then
there is something else very wrong going on, because when the log is
out of space the push target should be nowhere near the LSN we are
inserting new objects into the AIL at. (i.e. they should be 3/4s of
the log apart...)
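
As a back-of-the-envelope sketch of that arithmetic (assuming the
25% minimum free-space threshold that xlog_grant_push_ail() applies;
the log size is made up for illustration):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t	log_size = 64 * 1024 * 1024;	/* 64MB log */
	uint64_t	tail = 0;		/* tail at the log start */
	uint64_t	head = log_size;	/* log completely full */

	/* push target: at least a quarter of the log past the tail */
	uint64_t	target = tail + log_size / 4;

	printf("push target %llu, head %llu, gap ~%llu%% of the log\n",
	       (unsigned long long)target,
	       (unsigned long long)head,
	       (unsigned long long)((head - target) * 100 / log_size));
	return 0;
}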

> > So, from #1 we see that unconditional wakeups are not necessary in
> > the scenario you pose, and from #2 it's not a viable solution even
> > if it was required.
> > 
> > However, #1 indicates other problems if a xfs_log_space_wake() call
> > is necessary in this case. No reservation space and an empty AIL
> > implies that the CIL pins the entire log - a pending commit that
> > hasn't finished flushing and the current context that is
> > aggregating. This implies we've violated a much more important rule
> > of the on-disk log format: finding the head and tail of the log
> > requires no individual commit be larger than 50% of the log.
> > 
> 
> I described this exact problem days ago in the original thread. There's
> no need to rehash it here. FWIW, I can reproduce much worse than 50% log
> consumption aggregated outside of the AIL with the current code and it
> doesn't depend on a nonpreemptible kernel (though the workqueue fix
> looks legit to me).

I'm not rehashing anything intentionally - I'm responding to the
questions you are asking me directly in this thread. Maybe I am
going over something you've already mentioned in a previous thread;
if so, the overlap didn't occur to me because you didn't reference
it, and because I've spent more time looking at the code trying to
understand how this "impossible situation" was occurring than
reading mailing list discussions.

I've been certain that what we were seeing was some fundamental rule
being violated to cause this "log full, AIL empty" state, but I
couldn't work out exactly what it was. I was even questioning
whether I understood the basic operation of the log, because there
was no way I could see how the CIL would fail to push during log
recovery until the log was full.  I said this to Darrick yesterday
morning on #xfs:

[5/9/19 12:56] <dchinner> there's something bothering me about this
log head update thing and I can't put my finger on what it is....

It wasn't until Chandan's trace showed me the CPU hold-off problem
with the CIL workqueue that it clicked. A couple of hours after that
first message, having seen Chandan's trace:

[5/9/19 14:26] <dchinner> oooohhhh
[5/9/19 14:27] <dchinner> this isn't a premeptible kernel, is it?

And that was the thing that I couldn't put my finger on - I couldn't
work out how a CIL push was being delayed so long on a multi-cpu
system with lots of idle CPU that we had a completely empty AIL when
we ran out of reservation space.  IOWs, I didn't know the right
question to ask until I saw the answer in front of me.

I've never seen a "CIL checkpoint too large" issue manifest in the
real world, but it's been there since delayed logging was
introduced. I knew about this issue right from the start, but it was
largely a theoretical concern because workqueue scheduling preempts
userspace and so is mostly only ever delayed by the number of
transactions in a single syscall. And for large, ongoing
transactions like a truncate, it will yield the moment we have to
pull in metadata from disk.

What's new in recent kernels is that the in-core unlinked inode
processing mechanisms have changed the way both the syscall and log
recovery mechanisms work (merged in 5.1, IIRC), and it looks like it
no longer blocks in log recovery like it used to. Given that
Christoph first reported this generic/530 issue in May, there's a
fair correlation indicating that the two are linked.

i.e. we changed the unlinked inode processing in a way that lets
the kernel run tens of thousands of unlink transactions without
yielding the CPU. That violated the "CIL push work will run
within a few transactions of the background push occurring"
mechanism the workqueue provided us with, and that, fundamentally,
is the underlying issue here. It's not a CIL vs empty AIL vs log
reservation exhaustion race condition - that's just an observable
symptom.

To that end, I have been prototyping patches to fix this exact
problem as part of the non-blocking inode reclaim series. I've been
looking at this because the CIL pins so much memory on large logs
and I wanted to put an upper bound on it that wasn't measured in GBs
of RAM. Hence I'm planning to pull these out into a separate series
now as it's clear that non-preemptible kernels and workqueues do not
play well together and that the more we use workqueues for async
processing, the more we introduce a potential real-world vector for
CIL overruns...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


