On Tue, Nov 26, 2024 at 4:34 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>
> 2. nodes are freed before invoking spin_unlock_irqrestore(). Therefore,
> there is no need to add paired migrate_{disable|enable}() calls for
> these free operations.
...
>         if (ret)
> -               kfree(new_node);
> +               bpf_mem_cache_free(&trie->ma, new_node);
> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>         spin_unlock_irqrestore(&trie->lock, irq_flags);
> -       kfree_rcu(free_node, rcu);
...
> +       bpf_mem_cache_free_rcu(&trie->ma, free_parent);
> +       bpf_mem_cache_free_rcu(&trie->ma, free_node);
>         spin_unlock_irqrestore(&trie->lock, irq_flags);
> -       kfree_rcu(free_parent, rcu);
> -       kfree_rcu(free_node, rcu);

Moving the frees back to under the lock wasn't obvious. I only
understood it after reading the commit log a second time. A code
comment would probably be good (a sketch of one is below).

Though I wonder whether we should add migrate_disable/enable in the
syscall path around these callbacks. We already wrap them with
rcu_read_lock(), so an extra migrate_disable() won't hurt, and it
would help this patch: bpf_mem_cache_free_rcu() could then be done
outside of the bucket lock. bpf_ma can easily exhaust its free list
in an irq-disabled region, so the more operations we keep outside of
the known irq-disabled region, the better. It would also help remove
migrate_disable/enable from a bunch of places in kernel/bpf where we
added them because of the syscall path or the map free path. Rough
sketches of both ideas are below. It's certainly a follow-up, if you
agree.

This patch set will go through the bpf tree (after, hopefully, a few
more acks from reviewers).
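
For the record, the kind of comment I had in mind, reusing the first
hunk above (hypothetical wording, feel free to rephrase):

        if (ret)
                bpf_mem_cache_free(&trie->ma, new_node);
        /* bpf_mem_cache_free_rcu() needs migration disabled. Freeing
         * before spin_unlock_irqrestore() reuses the irq-disabled
         * critical section for that, so no explicit
         * migrate_disable()/migrate_enable() pair is needed here.
         */
        bpf_mem_cache_free_rcu(&trie->ma, free_node);
        spin_unlock_irqrestore(&trie->lock, irq_flags);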
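
And a rough sketch of the syscall-path idea. The function name below
is made up and the real code in kernel/bpf/syscall.c looks different,
but the shape would be:

static int bpf_map_update_value_sketch(struct bpf_map *map, void *key,
                                       void *value, __u64 flags)
{
        int err;

        rcu_read_lock();
        /* New: disable migration across the whole callback, so map
         * implementations can use bpf_ma without adding their own
         * migrate_disable()/migrate_enable() pairs.
         */
        migrate_disable();
        err = map->ops->map_update_elem(map, key, value, flags);
        migrate_enable();
        rcu_read_unlock();
        return err;
}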
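
With that in place, the tail of trie_update_elem() could drop the lock
first and do the bpf_ma frees afterwards. Again a hypothetical sketch;
it assumes free_node was already unlinked from the trie while the lock
was held, so nothing else can reach it:

        spin_unlock_irqrestore(&trie->lock, irq_flags);

        /* Migration is already disabled by the syscall path (or by the
         * bpf program context), so the frees are safe here, outside
         * the irq-disabled region.
         */
        if (ret)
                bpf_mem_cache_free(&trie->ma, new_node);
        bpf_mem_cache_free_rcu(&trie->ma, free_node);

        return ret;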