He Fengqing wrote:
> It seems inc_misses_counter() suffers from the same issue fixed in
> commit d979617aa84d ("bpf: Fixes possible race in update_prog_stats()
> for 32bit arches"): as it can run while interrupts are enabled, it
> could be re-entered and the u64_stats syncp could be mangled.
>
> Fixes: 9ed9e9ba2337 ("bpf: Count the number of times recursion was prevented")
> Signed-off-by: He Fengqing <hefengqing@xxxxxxxxxx>
> ---
>  kernel/bpf/trampoline.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Appears possible through sleepable progs.

Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>

> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 4b6974a195c1..5e7edf913060 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -550,11 +550,12 @@ static __always_inline u64 notrace bpf_prog_start_time(void)
>  static void notrace inc_misses_counter(struct bpf_prog *prog)
>  {
>  	struct bpf_prog_stats *stats;
> +	unsigned long flags;
>
>  	stats = this_cpu_ptr(prog->stats);
> -	u64_stats_update_begin(&stats->syncp);
> +	flags = u64_stats_update_begin_irqsave(&stats->syncp);
>  	u64_stats_inc(&stats->misses);
> -	u64_stats_update_end_irqrestore(&stats->syncp, flags);
> +	u64_stats_update_end_irqrestore(&stats->syncp, flags);
>  }
>
>  /* The logic is similar to bpf_prog_run(), but with an explicit
> --
> 2.25.1
>
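
For readers unfamiliar with the 32-bit u64_stats machinery, below is a
minimal userspace sketch of the race being fixed. The names
(write_begin/write_end/read_stable) are hypothetical stand-ins for
illustration only, not the kernel implementation; it just shows how
re-entering the seqcount writer flips the parity back to even, so a
reader can treat a half-done 64-bit update as stable:

/*
 * Hypothetical sketch of the seqcount parity idea behind
 * u64_stats_sync on 32-bit; not kernel code.
 */
#include <stdio.h>

static unsigned int seq;              /* stand-in for stats->syncp */
static unsigned long long misses;     /* the 64-bit counter protected */

static void write_begin(void) { seq++; }  /* odd: update in progress */
static void write_end(void)   { seq++; }  /* even: counter is stable */

/* Readers treat an even, unchanged seq as "value read is consistent". */
static int read_stable(void) { return (seq & 1) == 0; }

int main(void)
{
	write_begin();            /* seq = 1: readers would retry */

	/* An interrupt re-entering the same path, without irqsave: */
	write_begin();            /* seq = 2: parity is even again! */
	misses++;                 /* 64-bit update, possibly torn on 32-bit */
	printf("mid-update, reader believes stable=%d\n", read_stable());
	write_end();              /* seq = 3 */

	write_end();              /* seq = 4 */
	return 0;
}

With interrupts disabled across the write section, the nested
write_begin() cannot happen in the first place, which is exactly what
switching to u64_stats_update_begin_irqsave()/_end_irqrestore() buys
here, mirroring commit d979617aa84d.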