Hi,

On 8/28/2023 3:55 PM, Jiri Olsa wrote:
> Count runtime stats for bpf programs executed through bpf_prog_run_array
> function. That covers kprobe, perf event and trace syscall probe.
>
> Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> ---
>  include/linux/bpf.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 478fdc4794c9..732253eea675 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2715,10 +2715,11 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>  		   const void *ctx, bpf_prog_run_fn run_prog)
>  {
>  	const struct bpf_prog_array_item *item;
> -	const struct bpf_prog *prog;
> +	struct bpf_prog *prog;
>  	struct bpf_run_ctx *old_run_ctx;
>  	struct bpf_trace_run_ctx run_ctx;
>  	u32 ret = 1;
> +	u64 start;
>
>  	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "no rcu lock held");
>
> @@ -2732,7 +2733,9 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
>  	item = &array->items[0];
>  	while ((prog = READ_ONCE(item->prog))) {
>  		run_ctx.bpf_cookie = item->bpf_cookie;
> +		start = bpf_prog_start_time();
>  		ret &= run_prog(prog, ctx);
> +		bpf_prog_update_prog_stats(prog, start);
>  		item++;
>  	}

bpf_prog_run() already accounts the run count and the time consumed by the
prog, so I think neither the previous patch nor this patch is needed.

>  	bpf_reset_run_ctx(old_run_ctx);
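
For reference, below is a rough sketch of the existing accounting path in
__bpf_prog_run() (paraphrased from include/linux/filter.h from memory, so
treat the exact layout as approximate). When the bpf_stats_enabled static
key is on, the run count and elapsed nanoseconds are already charged to the
program's per-cpu stats around every invocation:

  static __always_inline u32 __bpf_prog_run(const struct bpf_prog *prog,
					    const void *ctx,
					    bpf_dispatcher_fn dfunc)
  {
	u32 ret;

	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
		struct bpf_prog_stats *stats;
		u64 start = sched_clock();
		unsigned long flags;

		/* run the program, then charge one run and the elapsed
		 * nanoseconds to the prog's per-cpu stats
		 */
		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
		stats = this_cpu_ptr(prog->stats);
		flags = u64_stats_update_begin_irqsave(&stats->syncp);
		u64_stats_inc(&stats->cnt);
		u64_stats_add(&stats->nsecs, sched_clock() - start);
		u64_stats_update_end_irqrestore(&stats->syncp, flags);
	} else {
		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
	}

	return ret;
  }

For the callers this patch targets (e.g. trace_call_bpf()), run_prog is
bpf_prog_run(), which goes through the path above, so run_cnt/run_time_ns
reported by bpftool prog show (with sysctl kernel.bpf_stats_enabled=1)
should already cover these executions. That is why the extra
bpf_prog_start_time()/bpf_prog_update_prog_stats() calls look redundant
to me.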