Re: slow sync rcu_tasks_trace

On Wed, Sep 09, 2020 at 02:04:47PM -0700, Paul E. McKenney wrote:
> On Wed, Sep 09, 2020 at 12:48:28PM -0700, Alexei Starovoitov wrote:
> > On Wed, Sep 09, 2020 at 12:39:00PM -0700, Paul E. McKenney wrote:
> > > > > 
> > > > > When do you need this by?
> > > > > 
> > > > > Left to myself, I will aim for the merge window after the upcoming one,
> > > > > and then backport to the prior -stable versions having RCU tasks trace.
> > > > 
> > > > That would be too late.
> > > > We would have to disable sleepable bpf progs or convert them to srcu.
> > > > bcc/bpftrace have a limit of 1000 probes for regexes to make sure
> > > > these tools don't add too many kprobes to the kernel at once.
> > > > Right now fentry/fexit/freplace are using trampoline which does
> > > > synchronize_rcu_tasks(). My measurements show that it's roughly
> > > > equal to synchronize_rcu() on an idle box and perfectly capable
> > > > of being a replacement for kprobe-based attaching.
> > > > It's not uncommon to attach a hundred kprobes or fentry probes
> > > > at startup, so the bpf trampoline has to be able to do 1000 in a second.
> > > > And it was the case before sleepable got added to the trampoline.
> > > > Now it's doing:
> > > > synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
> > > > and it's causing this massive slowdown which makes bpf trampoline
> > > > pretty much unusable and everything that builds on top suffers.
> > > > I can add a counter of sleepable progs to the trampoline and do
> > > > either sync rcu_tasks or sync_mult(tasks, tasks_trace),
> > > > but we discussed exactly that idea a few months back and concluded
> > > > that rcu_tasks is likely to be heavier than rcu_tasks_trace, so I
> > > > didn't bother with the counter. I can still add it, but slow
> > > > rcu_tasks_trace means that sleepable progs are not usable due to
> > > > slow startup time, so I have to do something about sleepable anyway.
> > > > So "when do you need this by?" the answer is asap.
> > > > I'm considering such changes to be a bugfix, not a feature.
> > > 
> > > Got it.
> > > 
> > > With the patch below, I am able to reproduce this issue, as expected.
> > 
> > I think your test is more stressful than mine.
> > test_progs -t trampoline_count
> > doesn't run the sleepable progs, so there is no lock/unlock_trace at all.
> > It's updating the trampoline and doing sync_mult(); that's all.
> > 
> > > My plan is to try the following:
> > > 
> > > 1.	Parameterize the backoff sequence so that RCU Tasks Trace
> > > 	uses faster rechecking than does RCU Tasks.  Experiment as
> > > 	needed to arrive at a good backoff value.
> > > 
> > > 2.	If the tasks-list scan turns out to be a tighter bottleneck
> > > 	than the backoff waits, look into parallelizing this scan.
> > > 	(This seems unlikely, but the fact remains that RCU Tasks
> > > 	Trace must do a bit more work per task than RCU Tasks.)
> > > 
> > > 3.	If these two approaches still don't get the update-side
> > > 	latency where it needs to be, improvise.
> > > 
> > > The exact path into mainline will of course depend on how far down
> > > this list I must go, but the first step is to get a solution.
> > 
> > I think there is also a case 4: nothing is inside an rcu_trace critical
> > section. I would expect a single IPI would confirm that.
> 
> Unless the task moves, yes.  So a single IPI should suffice in the
> common case.

And what I am doing now is checking code paths.

							Thanx, Paul


