On Thu, Jul 04, 2019 at 10:13:15AM -0700, Paul E. McKenney wrote:
> On Wed, Jul 03, 2019 at 11:24:54PM -0400, Joel Fernandes wrote:
> > On Wed, Jul 03, 2019 at 05:50:09PM -0700, Paul E. McKenney wrote:
> > > On Wed, Jul 03, 2019 at 08:32:13PM -0400, Joel Fernandes wrote:
>
> [ . . . ]
>
> > > > If I add an rcu_perf_wait_shutdown() to the end of the loop, the
> > > > outliers go away.
> > > >
> > > > Still can't explain that :)
> > > >
> > > > do {
> > > >         ...
> > > >         ...
> > > > +       rcu_perf_wait_shutdown();
> > > > } while (!torture_must_stop());
> > >
> > > Might it be the cond_resched_tasks_rcu_qs() invoked from within
> > > rcu_perf_wait_shutdown()?  So I have to ask...  What happens if you
> > > use cond_resched_tasks_rcu_qs() at the end of that loop instead of
> > > rcu_perf_wait_shutdown()?
> >
> > I don't think it is, if I call cond_resched_tasks_rcu_qs(), it still
> > doesn't help. Only calling rcu_perf_wait_shutdown() cures it.
>
> My eyes seem to be working better today.  Here is rcu_perf_wait_shutdown():
>
>         static void rcu_perf_wait_shutdown(void)
>         {
>                 cond_resched_tasks_rcu_qs();
>                 if (atomic_read(&n_rcu_perf_writer_finished) < nrealwriters)
>                         return;
>                 while (!torture_must_stop())
>                         schedule_timeout_uninterruptible(1);
>         }
>
> Take a close look at the "while" loop.  It is effectively ending your
> test prematurely and thus rendering the code no longer CPU-bound.  ;-)

That makes a lot of sense. I also found that I can drop
rcu_perf_wait_shutdown() from my preempt-disable loop as long as I don't do
an ftrace trace. I suspect the trace dump happening at the end is messing
with the last iteration of the writer loops; my preempt-disable loop
probably keeps preemption disabled for a long time, without rescheduling,
while that ftrace dump is in progress. Anyway, keeping
rcu_perf_wait_shutdown() without doing the ftrace dump seems to solve it.

So the point of all my testing (other than learning) was to compare how RCU
does pre-consolidation vs post-consolidation. As predicted, with
consolidated RCU the preempt-disable/enable loop does manage to slow down
the grace periods. This is not an issue per se, since as you said even
hundreds of milliseconds of grace-period delay is within acceptable RCU
latencies. The results are below.

I am happy to try out any other test scenarios if you would like me to, and
I am open to any other suggestions you may have to improve the rcuperf
tests in this (deferred/consolidated RCU) or other regards.

I did have a request: could you help me understand why the grace-period
duration is double my busy-wait time? You mentioned this has something to
do with the thread not waking up before another GP is started, but I did
not follow that. Thanks a lot!!

Performance changes in consolidated vs regular
----------------------------------------------

I ran a thread on a reserved CPU doing preempt disable + busy wait + preempt
enable in a loop and measured the difference in rcuperf results between
consolidated and regular RCU. nreaders = nwriters = 10.

(preempt-disable duration)         5ms        10ms       20ms       50ms
v4.19 median (usecs)               12000.3    12001      11000      12000
v5.1 (deferred) median (usecs)     13000      19999      40000      100000

All of this is still within spec of RCU.

Note, as discussed: these results are independent of the value of
jiffies_to_sched_qs. However, if I don't do a set_preempt_need_resched() in
my preempt-disable + enable loop, then I need to lower jiffies_to_sched_qs
to bring down the grace-period durations. This is understandable because,
without it, the tick may not find out soon enough that it needs to
reschedule the preempt-disable busy loop.
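For completeness, the busy-wait thread I am running looks roughly like the
following. This is a simplified sketch rather than the exact code I used;
busy_wait_ms and the thread function name are just placeholders:

        #include <linux/delay.h>
        #include <linux/kthread.h>
        #include <linux/preempt.h>
        #include <linux/sched.h>

        /* Placeholder knob: 5, 10, 20, 50 ms as in the table above. */
        static int busy_wait_ms = 20;

        static int preempt_busy_thread(void *arg)
        {
                while (!kthread_should_stop()) {
                        preempt_disable();
                        mdelay(busy_wait_ms);   /* busy wait with preemption off */
                        /*
                         * Ask for a reschedule before re-enabling preemption;
                         * without this, jiffies_to_sched_qs has to be lowered
                         * as noted above.
                         */
                        set_preempt_need_resched();
                        preempt_enable();
                        cond_resched();         /* let the scheduler run between iterations */
                }
                return 0;
        }

The thread is created with kthread_create() and pinned to the reserved CPU
with kthread_bind() before being woken up.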
thanks,

J.