On Wed, Aug 30, 2023 at 10:04:05AM +0200, Sebastian Andrzej Siewior wrote:
> __bpf_prog_enter() assigns bpf_tramp_run_ctx::saved_run_ctx before

I guess you meant __bpf_prog_enter_recur right?

> performing the recursion check which means in case of a recursion
> __bpf_prog_exit() uses the previously set
> bpf_tramp_run_ctx::saved_run_ctx value.
>
> __bpf_prog_enter_sleepable() assigns bpf_tramp_run_ctx::saved_run_ctx

__bpf_prog_enter_sleepable_recur ?

> after the recursion check which means in case of a recursion
> __bpf_prog_exit_sleepable() uses an uninitialized value.
> This does not look right. If I read the entry trampoline code right,
> then bpf_tramp_run_ctx isn't initialized upfront.
>
> Align __bpf_prog_enter_sleepable() with __bpf_prog_enter() and set

ditto

> bpf_tramp_run_ctx::saved_run_ctx before the recursion check is made.
> Remove the assignment of saved_run_ctx in kern_sys_bpf() since it
> happens a few cycles later.
>
> Fixes: e384c7b7b46d0 ("bpf, x86: Create bpf_tramp_run_ctx on the caller thread's stack")
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

makes sense to me.. I ran selftests and all passed,
CI seems to fail due to unrelated issues that are just being fixed

Acked-by: Jiri Olsa <jolsa@xxxxxxxxxx>

jirka

> ---
>  kernel/bpf/syscall.c    | 1 -
>  kernel/bpf/trampoline.c | 5 ++---
>  2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index c925c270ed8b4..1480b6cf12f06 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -5304,7 +5304,6 @@ int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
>  	}
>  
>  	run_ctx.bpf_cookie = 0;
> -	run_ctx.saved_run_ctx = NULL;
>  	if (!__bpf_prog_enter_sleepable_recur(prog, &run_ctx)) {
>  		/* recursion detected */
>  		__bpf_prog_exit_sleepable_recur(prog, 0, &run_ctx);
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 78acf28d48732..53ff50cac61ea 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -926,13 +926,12 @@ u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
>  	migrate_disable();
>  	might_fault();
>  
> +	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
> +
>  	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
>  		bpf_prog_inc_misses_counter(prog);
>  		return 0;
>  	}
> -
> -	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
> -
>  	return bpf_prog_start_time();
> }
>
> --
> 2.40.1
>
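
For readers following along, below is a minimal user-space sketch of the
ordering hazard the changelog describes: the exit path restores
saved_run_ctx unconditionally, also on the recursion path, so assigning
it only after the recursion check leaves it uninitialized exactly when
the exit path reads it. The names here (tramp_run_ctx, set_run_ctx,
enter_old, enter_fixed, prog_exit) are simplified stand-ins for the
kernel's bpf_tramp_run_ctx handling, not the real API, and the per-CPU
prog->active counter is modeled by a plain int.

/* Minimal model of the enter/exit ordering bug; not kernel code. */
#include <stdio.h>

struct run_ctx { int dummy; };

struct tramp_run_ctx {
	struct run_ctx run_ctx;
	struct run_ctx *saved_run_ctx;	/* stack garbage until assigned */
};

static struct run_ctx *current_run_ctx;	/* models the task's run ctx pointer */
static int active;			/* models the per-CPU prog->active */

static struct run_ctx *set_run_ctx(struct run_ctx *new_ctx)
{
	struct run_ctx *old = current_run_ctx;

	current_run_ctx = new_ctx;
	return old;
}

/* Old ordering: recursion check first, saved_run_ctx assigned after. */
static int enter_old(struct tramp_run_ctx *ctx)
{
	if (++active != 1)
		return 0;	/* recursion: saved_run_ctx never written */
	ctx->saved_run_ctx = set_run_ctx(&ctx->run_ctx);
	return 1;
}

/* Fixed ordering: assign saved_run_ctx before the recursion check. */
static int enter_fixed(struct tramp_run_ctx *ctx)
{
	ctx->saved_run_ctx = set_run_ctx(&ctx->run_ctx);
	if (++active != 1)
		return 0;
	return 1;
}

/* Exit always restores saved_run_ctx, also on the recursion path. */
static void prog_exit(struct tramp_run_ctx *ctx)
{
	set_run_ctx(ctx->saved_run_ctx);
	active--;
}

int main(void)
{
	struct run_ctx outer = { 0 };
	struct tramp_run_ctx inner;

	/* Deterministic stand-in for whatever happens to be on the stack. */
	inner.saved_run_ctx = (struct run_ctx *)0x1badc0de;

	current_run_ctx = &outer;
	active = 1;			/* an outer prog is already running */

	if (!enter_old(&inner))
		prog_exit(&inner);	/* mirrors the kern_sys_bpf() path */
	printf("old ordering:   current_run_ctx=%p (garbage, not &outer=%p)\n",
	       (void *)current_run_ctx, (void *)&outer);

	current_run_ctx = &outer;
	active = 1;

	if (!enter_fixed(&inner))
		prog_exit(&inner);
	printf("fixed ordering: current_run_ctx=%p (== &outer=%p)\n",
	       (void *)current_run_ctx, (void *)&outer);
	return 0;
}

With the old ordering the recursion path restores whatever the stack
held; with the patched ordering the outer context is restored correctly,
which is also why the extra saved_run_ctx = NULL in kern_sys_bpf()
becomes redundant.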