On Mon, Aug 29, 2022 at 3:39 PM Martin KaFai Lau <kafai@xxxxxx> wrote:
>
> On Thu, Aug 25, 2022 at 07:44:16PM -0700, Alexei Starovoitov wrote:
> > +/* Mostly runs from irq_work except __init phase. */
> > +static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> > +{
> > +	struct mem_cgroup *memcg = NULL, *old_memcg;
> > +	unsigned long flags;
> > +	void *obj;
> > +	int i;
> > +
> > +	memcg = get_memcg(c);
> > +	old_memcg = set_active_memcg(memcg);
> > +	for (i = 0; i < cnt; i++) {
> > +		obj = __alloc(c, node);
> > +		if (!obj)
> > +			break;
> > +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > +			/* In RT irq_work runs in per-cpu kthread, so disable
> > +			 * interrupts to avoid preemption and interrupts and
> > +			 * reduce the chance of bpf prog executing on this cpu
> > +			 * when active counter is busy.
> > +			 */
> > +			local_irq_save(flags);
> > +		if (local_inc_return(&c->active) == 1) {
> Is it because it is always '== 1' here so that there is
> no need to free the obj when it is '!= 1' ?

Great catch. It's a bug indeed.
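
For illustration only (a sketch against the snippet quoted above, not the
fix that will necessarily go into the next revision): one way to close the
leak would be to release the freshly allocated object when the 'active'
counter turns out to be busy, instead of silently dropping it:

		if (local_inc_return(&c->active) == 1) {
			__llist_add(obj, &c->free_llist);
			c->free_cnt++;
		} else {
			/* The counter is busy, e.g. an NMI bpf prog on this
			 * cpu is in the middle of unit_alloc/unit_free, so we
			 * must not touch free_llist. Don't leak the object we
			 * just allocated: hand it back to the allocator.
			 * For simplicity this ignores the percpu case, which
			 * would need its matching free helper rather than
			 * kfree().
			 */
			kfree(obj);
		}
		local_dec(&c->active);

Another possible direction would be to argue that alloc_bulk() only runs
from irq_work (or __init), where it cannot race with a bpf prog on the same
cpu, and turn the '== 1' check into a WARN_ON_ONCE() instead.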