On Mon, Jun 26, 2023 at 09:09:20AM -0700, Alexei Starovoitov wrote:
> On Mon, Jun 26, 2023 at 8:42 AM David Vernet <void@xxxxxxxxxxxxx> wrote:
> >
> > On Fri, Jun 23, 2023 at 08:13:33PM -0700, Alexei Starovoitov wrote:
> > > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> > >
> > > Convert bpf_cpumask to bpf_mem_cache_free_rcu.
> > >
> > > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Acked-by: David Vernet <void@xxxxxxxxxxxxx>
> >
> > LGTM, thanks for cleaning this up. I left one drive-by comment /
> > observation below, but it's not a blocker for this patch / series.
> >
> > > ---
> > >  kernel/bpf/cpumask.c | 20 ++++++--------------
> > >  1 file changed, 6 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c
> > > index 938a60ff4295..6983af8e093c 100644
> > > --- a/kernel/bpf/cpumask.c
> > > +++ b/kernel/bpf/cpumask.c
> > > @@ -9,7 +9,6 @@
> > >  /**
> > >   * struct bpf_cpumask - refcounted BPF cpumask wrapper structure
> > >   * @cpumask: The actual cpumask embedded in the struct.
> > > - * @rcu: The RCU head used to free the cpumask with RCU safety.
> > >   * @usage: Object reference counter. When the refcount goes to 0, the
> > >   *	    memory is released back to the BPF allocator, which provides
> > >   *	    RCU safety.
> > > @@ -25,7 +24,6 @@
> > >   */
> > >  struct bpf_cpumask {
> > >  	cpumask_t cpumask;
> > > -	struct rcu_head rcu;
> > >  	refcount_t usage;
> > >  };
> > >
> > > @@ -82,16 +80,6 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask)
> > >  	return cpumask;
> > >  }
> > >
> > > -static void cpumask_free_cb(struct rcu_head *head)
> > > -{
> > > -	struct bpf_cpumask *cpumask;
> > > -
> > > -	cpumask = container_of(head, struct bpf_cpumask, rcu);
> > > -	migrate_disable();
> > > -	bpf_mem_cache_free(&bpf_cpumask_ma, cpumask);
> > > -	migrate_enable();
> > > -}
> > > -
> > >  /**
> > >   * bpf_cpumask_release() - Release a previously acquired BPF cpumask.
> > >   * @cpumask: The cpumask being released.
> > > @@ -102,8 +90,12 @@ static void cpumask_free_cb(struct rcu_head *head)
> > >   */
> > >  __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask)
> > >  {
> > > -	if (refcount_dec_and_test(&cpumask->usage))
> > > -		call_rcu(&cpumask->rcu, cpumask_free_cb);
> > > +	if (!refcount_dec_and_test(&cpumask->usage))
> > > +		return;
> > > +
> > > +	migrate_disable();
> > > +	bpf_mem_cache_free_rcu(&bpf_cpumask_ma, cpumask);
> > > +	migrate_enable();
> >
> > The fact that callers have to disable migration like this in order to
> > safely free the memory feels a bit leaky. Is there any reason we can't
> > move this into bpf_mem_{cache_}free_rcu()?
>
> migrate_disable/enable() are actually not necessary here.
> We can call bpf_mem_cache_free_rcu() directly from any kfunc.

Could you please clarify why? Can't we migrate if the kfunc is called
from a sleepable struct_ops callback? If migration is always disabled
for any kfunc then I agree these migrate_{en,dis}able() calls can be
removed. Otherwise, from my reading of the code, we'd race between
calling this_cpu_ptr() and the local_irq_save() in unit_free().

Thanks,
David

> Explicit migrate_disable() is only necessary from syscall.
>
> I believe rcu callbacks also cannot migrate, so the existing
> code probably doesn't need them either.
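For readers following along: the window David is asking about can be
sketched roughly as below. This is a simplified, non-runnable
illustration of the pattern in kernel/bpf/memalloc.c as of this thread;
the exact function and field names may differ, and unit_free_rcu() here
stands in for the internal free path.

```c
/* Hedged sketch, not verbatim kernel code. */
void bpf_mem_cache_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
{
	struct bpf_mem_cache *c;

	/* Resolves the cache belonging to the CPU we run on *now*. */
	c = this_cpu_ptr(ma->cache);

	/*
	 * If the caller has not disabled migration (e.g. a kfunc
	 * invoked from a sleepable context), the task could migrate
	 * to another CPU right here.
	 */

	/*
	 * unit_free_rcu() does local_irq_save() internally, but after
	 * a migration that only masks interrupts on the *new* CPU,
	 * while 'c' still points at the old CPU's per-cpu state --
	 * the race David is describing.
	 */
	unit_free_rcu(c, ptr);
}
```

If migration is guaranteed to be disabled whenever a kfunc runs, the
this_cpu_ptr() result stays valid through unit_free_rcu() and the
explicit migrate_disable()/migrate_enable() pair in the patch is
redundant; otherwise it is load-bearing.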