Re: [PATCH 2/2] xfs: Throttle commits on delayed background CIL push

On Thu, Oct 03, 2019 at 11:25:56AM +1000, Dave Chinner wrote:
> On Wed, Oct 02, 2019 at 08:41:39AM -0400, Brian Foster wrote:
> > On Wed, Oct 02, 2019 at 09:14:33AM +1000, Dave Chinner wrote:
> > > On Tue, Oct 01, 2019 at 09:13:36AM -0400, Brian Foster wrote:
> > > > On Tue, Oct 01, 2019 at 01:42:07PM +1000, Dave Chinner wrote:
> > > > > So typically groups of captures are hundreds of log cycles apart
> > > > > (100 cycles x 32MB = ~3GB of log writes), then there will be a
> > > > > stutter where the CIL dispatch is delayed, and then everything
> > > > > continues on. These all show the log is always around the 75% full
> > > > > (AIL tail pushing threshold) but the reservation grant wait lists are
> > > > > always empty so we're not running out of reservation space here.
> > > > > 
> > > > 
> > > > It's somewhat interesting that we manage to block every thread most of
> > > > the time before the CIL push task starts. I wonder a bit if that pattern
> > > > would hold for a system/workload with more CPUs (and if so, if there are
> > > > any odd side effects of stalling and waking hundreds of tasks at the
> > > > same time vs. our traditional queuing behavior).
> > > 
> > > If I increase the concurrency (e.g. 16->32 threads for fsmark on a
> > > 64MB log), we hammer the spinlock on the grant head -hard-. i.e. CPU
> > > usage goes up by 40%, performance goes down by 50%, and all that CPU
> > > time is spent spinning on the reserve grant head lock. Basically,
> > > the log reservation space runs out, and we end up queuing on the
> > > reservation grant head and then we get reminded of just how bad
> > > having a serialisation point in the reservation fast path actually
> > > is for scalability...
> > > 
> > 
> > The small log case is not really what I'm wondering about. Does this
> > behavior translate to a similar test with a maximum sized log?
> 
> Nope, the transactions all hit the CIL throttle within a couple of
> hundred microseconds of each other, then the CIL push schedules, and
> then a couple of hundred microseconds later they are unblocked
> because the CIL push has started.
> 
> > ...
> > > 
> > > Larger logs block more threads on the CIL throttle, but the 32MB CIL
> > > window can soak up hundreds of max sized transaction reservations
> > > before overflowing so even running several hundred concurrent
> > > modification threads I haven't been able to drive enough concurrency
> > > through the CIL to see any sort of adverse behaviour.  And the
> > > workloads are running pretty consistently at less than 5,000 context
> > > switches/sec so there's no evidence of repeated thundering herd
> > > wakeup problems, either.
> > > 
> > 
> > That speaks to the rarity of the throttle, which is good. But I'm
> > wondering, for example, what might happen on systems where we could have
> > hundreds of physical CPUs committing to the CIL, we block them all on
> > the throttle and then wake them all at once. IOW, can we potentially
> > create the contention conditions you reproduce above in scenarios where
> > they might not have existed before?
> 
> I don't think it will create any new contention points - the
> contention I described above can be triggered without the CIL
> throttle in place, too. It just requires enough concurrent
> transactions to exhaust the entire log reservation, and then we go
> from a lockless grant head reservation algorithm to a spinlock
> serialised waiting algorithm.  i.e. the contention starts when we
> have enough concurrency to fall off the lockless fast path.
> 
> So with a 2GB log and fast storage, we likely need a sustained
> workload of tens of thousands of concurrent transaction reservations
> to exhaust log space and drive us into this situation. We generally
> don't have applications that have this sort of concurrency
> capability...
> 

That there are some systems/configurations out there that are fast
enough to avoid this condition doesn't really answer the question. If
you assume something like a 1TB fs and 500MB log, with 1/4 the log
consumed in the AIL and 64MB in the CIL (such that transaction commits
start to block), the remaining log reservation can easily be consumed by
something on the order of 100 open transactions.
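
To put rough numbers on that (assuming ~2MB per permanent reservation,
i.e. the t_ocnt 8 x t_unit_res ~266k tickets in the trace further down):

  500MB log - 125MB (1/4 in AIL) - 64MB (CIL throttle) ~= 311MB grant space
  311MB / ~2MB per transaction                         ~= ~150 transactions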

Hmm, I'm also not sure the lockless reservation algorithm is totally
immune to increased concurrency in this regard. What prevents multiple
tasks from racing through xlog_grant_head_check() and blowing past the
log tail, for example?
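
To illustrate the window I'm worried about, here's a rough sketch of the
check-then-add pattern in the reservation fast path (heavily simplified
from xfs_log_reserve()/xlog_grant_head_check(), queueing and error
handling elided, so treat the structure as approximate rather than the
actual code):

  /*
   * Sketch of the lockless reservation fast path: the space check and
   * the grant head update are two separate operations with no lock
   * held across them.
   */
  static int log_reserve_sketch(struct xlog *log, int need_bytes)
  {
      /* 1. Check: compute free space from a head/tail snapshot. */
      int free_bytes = xlog_space_left(log, &log->l_reserve_head.grant);

      if (free_bytes < need_bytes)
          return -EAGAIN;    /* slow path: queue on the grant head */

      /*
       * Race window: any number of tasks can pass the check above
       * before any of them performs the add below, collectively
       * reserving more space than the check saw.
       */

      /* 2. Add: atomically move the reserve grant head forward. */
      xlog_grant_add_space(log, &log->l_reserve_head.grant, need_bytes);
      return 0;
  }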

I gave this a quick test out of curiosity. On a 15GB fs with a 10MB
log, I should only be able to send 5 or so truncate transactions through
xfs_log_reserve() before blocking. With a couple of injected delays, I'm
easily able to send 32 into the grant space modification code, and that
eventually results in something like this:

  truncate-1233  [002] ...1  1520.396545: xfs_log_reserve_exit: dev 253:4 t_ocnt 8 t_cnt 8 t_curr_res 266260 t_unit_res 266260 t_flags XLOG_TIC_INITED|XLOG_TIC_PERM_RESERV reserveq empty writeq empty grant_reserve_cycle 7 grant_reserve_bytes 5306880 grant_write_cycle 7 grant_write_bytes 5306880 curr_cycle 1 curr_block 115 tail_cycle 1 tail_block 115

... where the grant heads have not only blown past the tail, but cycled
around the log multiple times: grant_reserve_cycle is 7 while tail_cycle
is still 1, i.e. the reserve head is six full cycles (~60MB on a 10MB
log) ahead of the tail. :/
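
(For reference, the sort of delay injection I mean is trivial; a
hypothetical hack along these lines, purely for testing, dropped in
between the free space check and the grant head update:

  /*
   * HACK, test only: widen the window between the free space check
   * and the grant head update so multiple reservations can pass the
   * check before any of them is accounted.
   */
  if (xfs_log_race_delay)    /* hypothetical debug knob */
      msleep(100);

...which is enough to let many tasks through the check before the first
add lands.)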

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx


