On Tue, 30 Aug 2022 at 00:43, Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Mon, Aug 29, 2022 at 3:39 PM Martin KaFai Lau <kafai@xxxxxx> wrote:
> >
> > On Thu, Aug 25, 2022 at 07:44:16PM -0700, Alexei Starovoitov wrote:
> > > +/* Mostly runs from irq_work except __init phase. */
> > > +static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node)
> > > +{
> > > +	struct mem_cgroup *memcg = NULL, *old_memcg;
> > > +	unsigned long flags;
> > > +	void *obj;
> > > +	int i;
> > > +
> > > +	memcg = get_memcg(c);
> > > +	old_memcg = set_active_memcg(memcg);
> > > +	for (i = 0; i < cnt; i++) {
> > > +		obj = __alloc(c, node);
> > > +		if (!obj)
> > > +			break;
> > > +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > > +			/* In RT irq_work runs in per-cpu kthread, so disable
> > > +			 * interrupts to avoid preemption and interrupts and
> > > +			 * reduce the chance of bpf prog executing on this cpu
> > > +			 * when active counter is busy.
> > > +			 */
> > > +			local_irq_save(flags);
> > > +		if (local_inc_return(&c->active) == 1) {
> >
> > Is it because it is always '== 1' here so that there is
> > no need to free the obj when it is '!= 1' ?
>
> Great catch. It's a bug indeed.

Is it a bug? It seems it will always be '== 1' for alloc_bulk. IIUC the
check is only needed to keep NMI's unit_alloc/unit_free from touching
free_llist, so that the NMI path llist_adds atomically to free_llist_nmi
instead.

Since this runs from irq_work, we run exclusively of any other
unit_free; otherwise the __llist_add wouldn't be safe either. unit_free
already does local_irq_save.
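
FWIW, here is a rough sketch of how I read the protection scheme (my
paraphrase, not the patch code; the free_llist, free_llist_nmi and
active names follow the patch, everything else is simplified, e.g. the
RT-only irq handling and free_cnt accounting are left out):

/* Sketch only -- illustrates the active-counter guard, not the patch. */
#include <linux/llist.h>    /* llist_add, __llist_add */
#include <linux/irqflags.h> /* local_irq_save/restore */
#include <asm/local.h>      /* local_t, local_inc_return, local_dec */

struct mem_cache_sketch {
	struct llist_head free_llist;     /* touched only while we own 'active' */
	struct llist_head free_llist_nmi; /* NMI-safe fallback, atomic llist ops only */
	local_t active;
};

static void unit_free_sketch(struct mem_cache_sketch *c,
			     struct llist_node *llnode)
{
	unsigned long flags;

	/* With IRQs off, neither irq_work (alloc_bulk) nor another
	 * unit_free can run underneath us on this CPU; only an NMI can
	 * still re-enter.
	 */
	local_irq_save(flags);
	if (local_inc_return(&c->active) == 1) {
		/* Sole owner of free_llist on this CPU, so the cheaper
		 * non-atomic __llist_add is safe.
		 */
		__llist_add(llnode, &c->free_llist);
	} else {
		/* Re-entered (e.g. an NMI prog freeing) while free_llist
		 * is busy: push to the NMI list with the atomic primitive.
		 */
		llist_add(llnode, &c->free_llist_nmi);
	}
	local_dec(&c->active);
	local_irq_restore(flags);
}

The split into two lists is what lets the common path keep the cheap
non-atomic __llist_add while the re-entrant path still has somewhere
safe to put the object.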