On Wed, Oct 02, 2019 at 08:41:39AM -0400, Brian Foster wrote:
> On Wed, Oct 02, 2019 at 09:14:33AM +1000, Dave Chinner wrote:
> > On Tue, Oct 01, 2019 at 09:13:36AM -0400, Brian Foster wrote:
> > > On Tue, Oct 01, 2019 at 01:42:07PM +1000, Dave Chinner wrote:
> > > > So typically groups of captures are hundreds of log cycles apart
> > > > (100 cycles x 32MB = ~3GB of log writes), then there will be a
> > > > stutter where the CIL dispatch is delayed, and then everything
> > > > continues on. These all show the log is always around 75% full
> > > > (the AIL tail pushing threshold), but the reservation grant wait
> > > > lists are always empty, so we're not running out of reservation
> > > > space here.
> > > >
> > >
> > > It's somewhat interesting that we manage to block every thread most
> > > of the time before the CIL push task starts. I wonder a bit if that
> > > pattern would hold for a system/workload with more CPUs (and if so,
> > > if there are any odd side effects of stalling and waking hundreds
> > > of tasks at the same time vs. our traditional queuing behavior).
> >
> > If I increase the concurrency (e.g. 16->32 threads for fsmark on a
> > 64MB log), we hammer the spinlock on the grant head -hard-. i.e.
> > CPU usage goes up by 40%, performance goes down by 50%, and all
> > that CPU time is spent spinning on the reserve grant head lock.
> > Basically, the log reservation space runs out, we end up queuing
> > on the reservation grant head, and then we get reminded of just
> > how bad having a serialisation point in the reservation fast path
> > actually is for scalability...
> >
>
> The small log case is not really what I'm wondering about. Does this
> behavior translate to a similar test with a maximum sized log?

Nope, the transactions all hit the CIL throttle within a couple of
hundred microseconds of each other, then the CIL push schedules, and
then a couple of hundred microseconds later they are unblocked
because the CIL push has started.

> ...
> >
> > Larger logs block more threads on the CIL throttle, but the 32MB
> > CIL window can soak up hundreds of max sized transaction
> > reservations before overflowing, so even running several hundred
> > concurrent modification threads I haven't been able to drive
> > enough concurrency through the CIL to see any sort of adverse
> > behaviour. And the workloads are running pretty consistently at
> > less than 5,000 context switches/sec, so there's no evidence of
> > repeated thundering herd wakeup problems, either.
> >
>
> That speaks to the rarity of the throttle, which is good. But I'm
> wondering, for example, what might happen on systems where we could
> have hundreds of physical CPUs committing to the CIL, we block them
> all on the throttle and then wake them all at once. IOW, can we
> potentially create the contention conditions you reproduce above in
> scenarios where they might not have existed before?

I don't think it will create any new contention points - the
contention I described above can be triggered without the CIL
throttle in place, too. It just requires enough concurrent
transactions to exhaust the entire log reservation, at which point
we go from a lockless grant head reservation algorithm to a
spinlock-serialised waiting algorithm. i.e. the contention starts
when we have enough concurrency to fall off the lockless fast path.

So with a 2GB log and fast storage, we likely need a sustained
workload of tens of thousands of concurrent transaction reservations
to exhaust log space and drive us into this situation.
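To make "falling off the lockless fast path" concrete, here's a
rough userspace model of the pattern (illustrative only - the names
are made up, and the real code in fs/xfs/xfs_log.c packs cycle/offset
state into the grant heads and keeps an ordered ticket queue rather
than a bare condvar):

	/*
	 * Toy model of a log grant head: a lockless CAS fast path
	 * that degrades to a lock-serialised wait once log space
	 * is exhausted.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define LOG_SIZE	(2ULL << 30)	/* a 2GB log */

	static _Atomic uint64_t grant_head;	/* bytes reserved */
	static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  wait_cond = PTHREAD_COND_INITIALIZER;

	/* Lockless fast path: take the space iff the reservation fits. */
	static bool grant_fast(uint64_t bytes)
	{
		uint64_t old = atomic_load(&grant_head);

		for (;;) {
			if (old + bytes > LOG_SIZE)
				return false;	/* out of space, fall off */
			if (atomic_compare_exchange_weak(&grant_head,
						&old, old + bytes))
				return true;
			/* CAS lost a race; old was reloaded, retry */
		}
	}

	/* Serialised slow path: every waiter funnels through here. */
	static void grant_slow(uint64_t bytes)
	{
		pthread_mutex_lock(&wait_lock);
		while (!grant_fast(bytes))
			pthread_cond_wait(&wait_cond, &wait_lock);
		pthread_mutex_unlock(&wait_lock);
	}

	void log_reserve(uint64_t bytes)
	{
		if (grant_fast(bytes))
			return;		/* common case: no lock taken */
		grant_slow(bytes);	/* contended case: serialised */
	}

	/* Tail pushing freed space: wake everyone to race for it. */
	void log_space_freed(uint64_t bytes)
	{
		atomic_fetch_sub(&grant_head, bytes);
		pthread_mutex_lock(&wait_lock);
		pthread_cond_broadcast(&wait_cond);
		pthread_mutex_unlock(&wait_lock);
	}

As long as reservations fit in the log, nothing ever touches
wait_lock. The moment they don't, every incoming reservation takes
wait_lock, and that's the spinning you see in the profiles above.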
We generally don't have applications that have this sort of
concurrency capability...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx