On 11/18/24 5:30 AM, Sebastian Andrzej Siewior wrote:
> On 2024-11-18 09:08:05 [+0800], Hou Tao wrote:
>> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
>> index d447a6dab83b..d8995acecedf 100644
>> --- a/kernel/bpf/lpm_trie.c
>> +++ b/kernel/bpf/lpm_trie.c
>> @@ -319,6 +326,25 @@ static int trie_check_noreplace_update(const struct lpm_trie *trie, u64 flags)
>>  	return 0;
>>  }
>> +static void lpm_trie_node_free(struct lpm_trie *trie,
>> +			       struct lpm_trie_node *node, bool defer)
>> +{
>> +	struct bpf_mem_alloc *ma;
>> +
>> +	if (!node)
>> +		return;
>> +
>> +	ma = (node->flags & LPM_TREE_NODE_FLAG_ALLOC_LEAF) ? trie->leaf_ma :
>> +							     trie->im_ma;
>> +
>> +	migrate_disable();
>> +	if (defer)
>> +		bpf_mem_cache_free_rcu(ma, node);
>> +	else
>> +		bpf_mem_cache_free(ma, node);
>> +	migrate_enable();
> I guess a preempt_disable() here instead wouldn't hurt much. The inner
> pieces of the allocator (unit_free()) do local_irq_save() for the
> entire function, so we don't win much with migrate_disable().
Typically, bpf_mem_*() functions are surrounded directly or indirectly
by migrate_disable()/migrate_enable(). Let's just keep this pattern to
stay consistent with other similar usages. One close example is in
kernel/bpf/cpumask.c.
>> +}
>> +
>
> Sebastian