On 8/28/2023 3:55 PM, Jiri Olsa wrote:
> Adding support to gather stats for kprobe_multi programs.
>
> We now count:
>  - missed stats due to bpf_prog_active protection (always)
>  - cnt/nsec of the bpf program execution (if kernel.bpf_stats_enabled=1)
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>

Acked-by: Hou Tao <houtao1@xxxxxxxxxx>

With one nit below.

> ---
>  kernel/trace/bpf_trace.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index a7264b2c17ad..0a8685fc1eee 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2706,18 +2706,24 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>  		.link = link,
>  		.entry_ip = entry_ip,
>  	};
> +	struct bpf_prog *prog = link->link.prog;
>  	struct bpf_run_ctx *old_run_ctx;
> +	u64 start;
>  	int err;
>
>  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
> +		bpf_prog_inc_misses_counter(prog);
>  		err = 0;
>  		goto out;
>  	}
>
> +

The extra empty line is not needed here.
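
For reference, the quoted hunk stops before the cnt/nsec accounting that the
commit message describes. A minimal sketch of how that kind of accounting is
usually done, mirroring the per-program stats pattern used elsewhere in the
kernel (the helper names below are made up for illustration and are not taken
from this patch):

	/*
	 * Sketch only, assuming the usual bpf_trace.c includes
	 * (<linux/filter.h>, <linux/sched/clock.h>, <linux/u64_stats_sync.h>).
	 * Runtime is measured only when kernel.bpf_stats_enabled=1 and is then
	 * folded into the program's per-CPU stats.
	 */
	static u64 kprobe_multi_stats_start(void)
	{
		/* 0 means "stats disabled, skip accounting on the exit path" */
		return static_branch_unlikely(&bpf_stats_enabled_key) ?
		       sched_clock() : 0;
	}

	static void kprobe_multi_stats_update(struct bpf_prog *prog, u64 start)
	{
		struct bpf_prog_stats *stats;
		unsigned long flags;

		if (!start)
			return;

		stats = this_cpu_ptr(prog->stats);
		flags = u64_stats_update_begin_irqsave(&stats->syncp);
		u64_stats_inc(&stats->cnt);
		u64_stats_add(&stats->nsecs, sched_clock() - start);
		u64_stats_update_end_irqrestore(&stats->syncp, flags);
	}

With this pattern sched_clock() is only read when the static key is enabled,
so the common bpf_stats_enabled=0 case costs a single predicted branch on
each program run.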