On Fri, 14 Feb 2020 14:39:21 +0100
Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:

> __bpf_trace_run() disables preemption around the BPF_PROG_RUN() invocation.
>
> This is redundant because __bpf_trace_run() is invoked from a trace point
> via __DO_TRACE() which already disables preemption _before_ invoking any of
> the functions which are attached to a trace point.
>
> Remove it.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> ---
>  kernel/trace/bpf_trace.c | 2 --
>  1 file changed, 2 deletions(-)
>
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1476,9 +1476,7 @@ static __always_inline
>  void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
>  {

Should there be a "cant_migrate()" added here?

-- Steve

> 	rcu_read_lock();
> -	preempt_disable();
> 	(void) BPF_PROG_RUN(prog, args);
> -	preempt_enable();
> 	rcu_read_unlock();
>  }
>
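
For readers following the thread: a minimal sketch of what Steve's suggestion
would look like on top of Thomas's removal, i.e. the resulting body of
__bpf_trace_run() in kernel/trace/bpf_trace.c with the debug assertion added.
This is illustrative only, not an applied patch; whether cant_migrate() is the
right annotation here is exactly the open question above.

	static __always_inline
	void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
	{
		/*
		 * Debug assertion (sketch): warn if the caller could still be
		 * migrated, i.e. if the tracepoint path did not disable
		 * preemption as __DO_TRACE() is expected to do.
		 */
		cant_migrate();

		rcu_read_lock();
		(void) BPF_PROG_RUN(prog, args);
		rcu_read_unlock();
	}

The idea is that the assertion documents and checks the assumption that the
tracepoint infrastructure already provides the required protection, so the
removed preempt_disable()/preempt_enable() pair is not silently relied upon.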