On Tue, Jun 20, 2023 at 10:11:15AM -0700, Alexei Starovoitov wrote:
> On Tue, Jun 20, 2023 at 10:35:27AM +0200, Jiri Olsa wrote:
> > +static int uprobe_prog_run(struct bpf_uprobe *uprobe,
> > +			   unsigned long entry_ip,
> > +			   struct pt_regs *regs)
> > +{
> > +	struct bpf_uprobe_multi_link *link = uprobe->link;
> > +	struct bpf_uprobe_multi_run_ctx run_ctx = {
> > +		.entry_ip = entry_ip,
> > +	};
> > +	struct bpf_prog *prog = link->link.prog;
> > +	struct bpf_run_ctx *old_run_ctx;
> > +	int err = 0;
> > +
> > +	might_fault();
> > +
> > +	rcu_read_lock_trace();
> > +	migrate_disable();
> > +
> > +	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1))
> > +		goto out;
>
> bpf_prog_run_array_sleepable() doesn't do such things.
> Such 'protection' will actively hurt.
> The sleepable prog below will block all kprobes on this cpu.
> please remove.

ok, makes sense, can't recall the reason why I added it

jirka

> > +
> > +	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
> > +
> > +	if (!prog->aux->sleepable)
> > +		rcu_read_lock();
> > +
> > +	err = bpf_prog_run(link->link.prog, regs);
> > +
> > +	if (!prog->aux->sleepable)
> > +		rcu_read_unlock();
> > +
> > +	bpf_reset_run_ctx(old_run_ctx);
> > +
> > +out:
> > +	__this_cpu_dec(bpf_prog_active);
> > +	migrate_enable();
> > +	rcu_read_unlock_trace();
> > +	return err;