Re: [PATCH RFC v3 bpf-next 1/4] bpf: Introduce sleepable BPF programs

On Thu, Jun 11, 2020 at 05:04:47PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 11, 2020 at 03:29:09PM -0700, Alexei Starovoitov wrote:
> > On Thu, Jun 11, 2020 at 3:23 PM Alexei Starovoitov
> > <alexei.starovoitov@xxxxxxxxx> wrote:
> > >
> > >  /* dummy _ops. The verifier will operate on target program's ops. */
> > >  const struct bpf_verifier_ops bpf_extension_verifier_ops = {
> > > @@ -205,14 +206,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
> > >             tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
> > >                 flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
> > >
> > > -       /* Though the second half of trampoline page is unused a task could be
> > > -        * preempted in the middle of the first half of trampoline and two
> > > -        * updates to trampoline would change the code from underneath the
> > > -        * preempted task. Hence wait for tasks to voluntarily schedule or go
> > > -        * to userspace.
> > > +       /* the same trampoline can hold both sleepable and non-sleepable progs.
> > > +        * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
> > > +        * programs finish executing. It also ensures that the rest of the
> > > +        * generated trampoline assembly finishes before updating the trampoline.
> > >          */
> > > -
> > > -       synchronize_rcu_tasks();
> > > +       synchronize_rcu_tasks_trace();
> > 
> > Hi Paul,
> > 
> > I've been looking at the rcu_trace implementation and I think the above
> > change is correct.
> > Could you please double check my understanding?
> 
> From an RCU Tasks Trace perspective, it looks good to me!
> 
> You have rcu_read_lock_trace() and rcu_read_unlock_trace() protecting
> the readers and synchronize_rcu_tasks_trace() waiting for them.
> 
> One question given my lack of understanding of BPF:  Are there still
> trampolines for non-sleepable BPF programs?  If so, they might still
> need to use synchronize_rcu_tasks() or some such.

The same trampoline can hold both sleepable and non-sleepable progs.
The following is possible:
. trampoline asm starts
  . rcu_read_lock + migrate_disable
    . non-sleepable prog_A
  . rcu_read_unlock + migrate_enable
. trampoline asm
  . rcu_read_lock_trace
    . sleepable prog_B
  . rcu_read_unlock_trace
. trampoline asm
  . rcu_read_lock + migrate_disable
    . non-sleepable prog_C
  . rcu_read_unlock + migrate_enable
. trampoline asm ends
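
To make the pairing explicit, here is a minimal C sketch of what the
trampoline conceptually does around each prog type (the helper names are
made up for illustration; the real trampoline emits this inline as
generated assembly):

	static void enter_non_sleepable(void)
	{
		rcu_read_lock();	/* non-sleepable progs run under plain RCU */
		migrate_disable();
	}

	static void exit_non_sleepable(void)
	{
		migrate_enable();
		rcu_read_unlock();
	}

	static void enter_sleepable(void)
	{
		rcu_read_lock_trace();	/* sleepable progs may block: RCU Tasks Trace */
	}

	static void exit_sleepable(void)
	{
		rcu_read_unlock_trace();
	}

Each pair brackets only its own prog; the surrounding trampoline assembly
itself runs outside either read-side section.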

> 
> The general principle is "never mix one type of RCU reader with another
> type of RCU updater".
> 
> But in this case, one approach is to use synchronize_rcu_mult():
> 
> 	synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);

That was my first approach, but I started looking deeper and it looks
like rcu_tasks_trace is stronger than rcu_tasks.
'never mix' is a valid concern, so for future proofing the rcu_mult()
is cleaner, but from a safety pov synchronize_rcu_tasks_trace() alone is
enough even when the trampoline doesn't hold sleepable progs, right?

Also, timing wise rcu_mult() is obviously faster than doing them
one at a time, but how would you rank their speeds:
A: synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
B: synchronize_rcu_tasks();
C: synchronize_rcu_tasks_trace();

> That would wait for both types of readers, and do so concurrently.
> And if there is also a need to wait on rcu_read_lock() and friends,
> you could do this:
> 
> 	synchronize_rcu_mult(call_rcu, call_rcu_tasks, call_rcu_tasks_trace);

I was about to reply that the trampoline doesn't need it and there is no such
case yet, but then I realized that I can use it in hashtab freeing with:
synchronize_rcu_mult(call_rcu, call_rcu_tasks_trace);
That would be a nice optimization.
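
A rough sketch of that idea (the free path and the element-freeing step
here are hypothetical, just to show where the combined wait would go):

	/* wait for regular RCU readers (non-sleepable progs) and RCU Tasks
	 * Trace readers (sleepable progs) concurrently before freeing
	 */
	synchronize_rcu_mult(call_rcu, call_rcu_tasks_trace);
	free_htab_elems(htab);	/* hypothetical: whatever actually frees the elements */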


