From: Hou Tao <houtao1@xxxxxxxxxx>

bpf_local_storage_map_free() invokes bpf_selem_unlink() to unlink and
free each element in the storage map, which in turn may invoke
bpf_obj_free_fields() to free the special fields in the map value. Since
these special fields may be allocated from the bpf memory allocator,
migrate_{disable|enable} pairs are necessary for the freeing of these
objects.

To simplify reasoning about when migrate_disable() is needed for the
freeing of these dynamically-allocated objects, let the caller guarantee
that migration is disabled before invoking bpf_obj_free_fields().
Therefore, this patch moves the migrate_{disable|enable} pair out of the
busy_counter condition, so migrate_disable() is called before
bpf_obj_free_fields() is invoked. The migrate_{disable|enable} pairs in
the underlying implementation of bpf_obj_free_fields() will be removed
by the following patch.

Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
---
 kernel/bpf/bpf_local_storage.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 615a3034baeb..b649cf736438 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -905,15 +905,13 @@ void bpf_local_storage_map_free(struct bpf_map *map,
 		while ((selem = hlist_entry_safe(
 				rcu_dereference_raw(hlist_first_rcu(&b->list)),
 				struct bpf_local_storage_elem, map_node))) {
-			if (busy_counter) {
-				migrate_disable();
+			migrate_disable();
+			if (busy_counter)
 				this_cpu_inc(*busy_counter);
-			}
 			bpf_selem_unlink(selem, true);
-			if (busy_counter) {
+			if (busy_counter)
 				this_cpu_dec(*busy_counter);
-				migrate_enable();
-			}
+			migrate_enable();
 			cond_resched_rcu();
 		}
 		rcu_read_unlock();
--
2.29.2