Re: Normal RCU grace period can be stalled for long because need-resched flags not set?

On Wed, Jul 03, 2019 at 10:39:35AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 03, 2019 at 12:41:34PM -0400, Joel Fernandes wrote:
> > On Wed, Jul 03, 2019 at 11:30:36AM -0400, Steven Rostedt wrote:
> > > On Wed, 3 Jul 2019 11:25:20 -0400
> > > Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > 
> > > 
> > > > I am sorry if this is not a realistic real-life problem, but more a
> > > > "doctor it hurts if I do this" problem as Steven once said ;-)
> > > > 
> > > > I'll keep poking ;-)
> > > 
> > > Hi Joel,
> > > 
> > > Can you also share the tests you are performing as well as any
> > > module/code changes you made so that we can duplicate the results?
> > 
> > Sure thing. Below is the diff that I applied to Paul's /dev branch, but I
> > believe Linus's tree should give the same results.
> > 
> > After applying the diff below, I run it like this:
> > tools/testing/selftests/rcutorture/bin/kvm.sh --bootargs "rcuperf.pd_test=1 rcuperf.pd_busy_wait=5000 rcuperf.holdout=5 rcuperf.pd_resched=0" --duration 1 --torture rcuperf
> > 
> > Some new options I added:
> > pd_test=1 runs the preempt-disable loop test.
> > pd_busy_wait is the busy-wait time for each pass through the loop, in microseconds.
> > pd_resched controls whether the loop periodically sets the need-resched flag itself.
> > 
> > If your qemu is a bit old or from debian, then you may also need to pass: --qemu-args "-net nic,model=e1000"
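
(In case it helps with reproducing without the diff handy: the pd_test kthread
boils down to roughly the sketch below. Names and structure here are
illustrative rather than a copy of the patch; the important parts are the
preempt_disable() around the busy-wait and the optional need-resched setting
controlled by pd_resched.)

	/* Illustrative sketch of the pd_test loop, not the actual patch. */
	static int pd_test_loop(void *unused)
	{
		ktime_t start;

		while (!kthread_should_stop()) {
			preempt_disable();
			start = ktime_get();
			/* Busy-wait pd_busy_wait microseconds with preemption off. */
			while (ktime_us_delta(ktime_get(), start) < pd_busy_wait)
				cpu_relax();
			if (pd_resched)
				/* Ask for a reschedule so RCU sees a QS sooner. */
				set_tsk_need_resched(current);
			preempt_enable();
		}
		return 0;
	}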
> > 
> > With pd_resched = 0, I get quite high average grace-period latencies. The
> > preempt-disable loop thread is running on its own CPU. Enabling the rcu:*
> > tracepoints, I see that for long stretches the RCU FQS loop keeps running
> > while the scheduler tick learns from rcu_preempt_deferred_qs() that there
> > is nothing to worry about (at least this is what I remember from tracing).
> > 
> > With pd_resched = 0, the output of the command above:
> > Average grace-period duration: 195629 microseconds
> > Minimum grace-period duration: 30111.7
> > 50th percentile grace-period duration: 211000
> > 90th percentile grace-period duration: 218000
> > 99th percentile grace-period duration: 222999
> > Maximum grace-period duration: 236351
> > 
> > With pd_resched = 1, the grace-period durations come out at more like twice
> > (10 ms) the busy-wait time (5 ms). I wonder why it's twice, but that's
> > still OK. It is as follows:
> > Average grace-period duration: 12302.2 microseconds
> > Minimum grace-period duration: 5998.35
> > 50th percentile grace-period duration: 12000.4
> > 90th percentile grace-period duration: 15996.4
> > 99th percentile grace-period duration: 18000.6
> > Maximum grace-period duration: 20998.6
> 
> Both of these results are within the design range for normal
> RCU grace-period durations on busy systems.  See the code in
> adjust_jiffies_till_sched_qs(), which is setting one of the "panic
> durations" at which RCU starts taking more aggressive actions to end
> the current grace period.  See especially:
> 
> 	if (j < HZ / 10 + nr_cpu_ids / RCU_JIFFIES_FQS_DIV)
> 		j = HZ / 10 + nr_cpu_ids / RCU_JIFFIES_FQS_DIV;
> 	pr_info("RCU calculated value of scheduler-enlistment delay is %ld jiffies.\n", j);
> 	WRITE_ONCE(jiffies_to_sched_qs, j);
> 
> This usually gets you about 100 milliseconds, and if you are starting
> grace periods in quick succession from a single thread while other threads
> are doing likewise, each grace-period wait gets to wait about two grace
> periods' worth due to the end of the previous grace period having started
> a new grace period before the thread is awakened.
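
(Working that arithmetic for this setup: HZ/10 jiffies is 100 ms regardless of
HZ, and nr_cpu_ids/RCU_JIFFIES_FQS_DIV adds next to nothing for a small guest,
assuming RCU_JIFFIES_FQS_DIV is still 256. So jiffies_to_sched_qs lands at
about 100 ms, and two grace periods' worth of that lines up with the ~200 ms
average and ~211 ms median I measured above with pd_resched = 0.)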
> 
> Of course, if this is causing trouble for some use case, it would not
> be hard to create a tunable to override this panic duration.  But that
> would of course require a real use case in real use, given that RCU isn't
> exactly short on tunables at the moment.  Significantly shortening this
> panic duration caused 0day to complain about slowness last I tried it,
> just so you know.

Thanks a lot for the explanation.
Indeed this code in the tick is doing a good job, and I just had to drop
jiffies_to_sched_qs to bring down the latencies. With a
jiffies_to_sched_qs of 50 instead of the default of 100, the latencies
drop about fourfold.
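
For reference, one way to set that from the same kvm.sh invocation is to add
the knob to the boot arguments, something along these lines
(rcutree.jiffies_till_sched_qs is the boot-time parameter that feeds
jiffies_to_sched_qs via adjust_jiffies_till_sched_qs(), if I have the spelling
right):

tools/testing/selftests/rcutorture/bin/kvm.sh --bootargs "rcuperf.pd_test=1 rcuperf.pd_busy_wait=5000 rcuperf.holdout=5 rcuperf.pd_resched=0 rcutree.jiffies_till_sched_qs=50" --duration 1 --torture rcuperf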

In the tick:
        if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
                /* Idle and userspace execution already are quiescent states. */
                if (!rcu_is_cpu_rrupt_from_idle() && !user) {
                        set_preempt_need_resched();     <-------\
                        set_tsk_need_resched(current);  <------- the preempt-disable
                                                                 test loop stands
                                                                 no chance!
                }
                __this_cpu_write(rcu_data.rcu_urgent_qs, false);
        }

Appreciate it! thanks,

 - Joel



