Hi,

On 2/25/2025 6:16 AM, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Fix the following deadlock:
> CPU A
> _free_event()
>   perf_kprobe_destroy()
>     mutex_lock(&event_mutex)
>       perf_trace_event_unreg()
>         synchronize_rcu_tasks_trace()
>
> There are several paths where _free_event() grabs event_mutex
> and calls sync_rcu_tasks_trace. Above is one such case.
>
> CPU B
> bpf_prog_test_run_syscall()
>   rcu_read_lock_trace()
>     bpf_prog_run_pin_on_cpu()
>       bpf_prog_load()
>         bpf_tracing_func_proto()
>           trace_set_clr_event()
>             mutex_lock(&event_mutex)

Considering that the unregistered case is not so frequent, would it be
better to try mutex_trylock() first and fall back to the workqueue only
when event_mutex is busy?

> Delegate trace_set_clr_event() to workqueue to avoid
> such lock dependency.
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> ---
>  kernel/trace/bpf_trace.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index a612f6f182e5..13bef2462e94 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -392,7 +392,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
>  	.arg2_type	= ARG_CONST_SIZE,
>  };
>
> -static void __set_printk_clr_event(void)
> +static void __set_printk_clr_event(struct work_struct *work)
>  {
>  	/*
>  	 * This program might be calling bpf_trace_printk,
> @@ -405,10 +405,11 @@ static void __set_printk_clr_event(void)
>  	if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
>  		pr_warn_ratelimited("could not enable bpf_trace_printk events");
>  }
> +static DECLARE_WORK(set_printk_work, __set_printk_clr_event);
>
>  const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
>  {
> -	__set_printk_clr_event();
> +	schedule_work(&set_printk_work);
>  	return &bpf_trace_printk_proto;
>  }
>
> @@ -451,7 +452,7 @@ static const struct bpf_func_proto bpf_trace_vprintk_proto = {
>
>  const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void)
>  {
> -	__set_printk_clr_event();
> +	schedule_work(&set_printk_work);
>  	return &bpf_trace_vprintk_proto;
>  }