On Mon, Aug 29, 2022 at 2:30 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 8/26/22 4:44 AM, Alexei Starovoitov wrote:
> [...]
> > +
> > +/* Called from BPF program or from sys_bpf syscall.
> > + * In both cases migration is disabled.
> > + */
> > +void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
> > +{
> > +        int idx;
> > +        void *ret;
> > +
> > +        if (!size)
> > +                return ZERO_SIZE_PTR;
> > +
> > +        idx = bpf_mem_cache_idx(size + LLIST_NODE_SZ);
> > +        if (idx < 0)
> > +                return NULL;
> > +
> > +        ret = unit_alloc(this_cpu_ptr(ma->caches)->cache + idx);
> > +        return !ret ? NULL : ret + LLIST_NODE_SZ;
> > +}
> > +
> > +void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
> > +{
> > +        int idx;
> > +
> > +        if (!ptr)
> > +                return;
> > +
> > +        idx = bpf_mem_cache_idx(__ksize(ptr - LLIST_NODE_SZ));
> > +        if (idx < 0)
> > +                return;
> > +
> > +        unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr);
> > +}
> > +
> > +void notrace *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma)
> > +{
> > +        void *ret;
> > +
> > +        ret = unit_alloc(this_cpu_ptr(ma->cache));
> > +        return !ret ? NULL : ret + LLIST_NODE_SZ;
> > +}
> > +
> > +void notrace bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr)
> > +{
> > +        if (!ptr)
> > +                return;
> > +
> > +        unit_free(this_cpu_ptr(ma->cache), ptr);
> > +}
>
> Looks like smp_processor_id() needs to be made aware that preemption might
> be ok just not migration to a different CPU?

Ahh. Migration is not disabled when the map is freed from a worker.
The this_cpu_ptr above and the local_irq_save shortly after need to happen
on the same CPU, so I'm thinking of adding migrate_disable to the htab free path.
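
Roughly (untested sketch; htab_free_one_elem() is just a placeholder for
whatever the free-from-worker path ends up calling, and it assumes the
bpf_mem_alloc API from the quoted patch):

#include <linux/preempt.h>

/* Keep the this_cpu_ptr() lookup in bpf_mem_cache_free() and the
 * local_irq_save() section inside unit_free() on the same CPU even
 * when called from a workqueue, where migration is not otherwise
 * disabled.
 */
static void htab_free_one_elem(struct bpf_mem_alloc *ma, void *elem)
{
        migrate_disable();
        bpf_mem_cache_free(ma, elem);
        migrate_enable();
}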