On Thu, Jul 01, 2021 at 12:20:37PM -0700, Alexei Starovoitov wrote:

[ ... ]

> +static void htab_free_prealloced_timers(struct bpf_htab *htab)
> +{
> +	u32 num_entries = htab->map.max_entries;
> +	int i;
> +
> +	if (likely(!map_value_has_timer(&htab->map)))
> +		return;
> +	if (htab_has_extra_elems(htab))
> +		num_entries += num_possible_cpus();
> +
> +	for (i = 0; i < num_entries; i++) {
> +		struct htab_elem *elem;
> +
> +		elem = get_htab_elem(htab, i);
> +		bpf_timer_cancel_and_free(elem->key +
> +					  round_up(htab->map.key_size, 8) +
> +					  htab->map.timer_off);
> +		cond_resched();
> +	}
> +}
> +

[ ... ]

> +static void htab_free_malloced_timers(struct bpf_htab *htab)
> +{
> +	int i;
> +
> +	for (i = 0; i < htab->n_buckets; i++) {
> +		struct hlist_nulls_head *head = select_bucket(htab, i);
> +		struct hlist_nulls_node *n;
> +		struct htab_elem *l;
> +
> +		hlist_nulls_for_each_entry(l, n, head, hash_node)

This is called from map_release_uref(), which does not run under RCU. Either a bucket lock or rcu_read_lock() is needed here.

Another question: can the prealloc map do the same thing as here (i.e. walk the buckets) during map_release_uref()?
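
For concreteness, an untested sketch of one possible shape of the fix, taking rcu_read_lock() around the bucket walk (the loop body after hlist_nulls_for_each_entry() is my guess, mirroring the offset arithmetic in the prealloc variant above; a bucket lock would work as well):

```c
static void htab_free_malloced_timers(struct bpf_htab *htab)
{
	int i;

	/* map_release_uref() does not run in an RCU read-side critical
	 * section, so take rcu_read_lock() to make the hlist_nulls walk
	 * safe against concurrent deletion.
	 */
	rcu_read_lock();
	for (i = 0; i < htab->n_buckets; i++) {
		struct hlist_nulls_head *head = select_bucket(htab, i);
		struct hlist_nulls_node *n;
		struct htab_elem *l;

		hlist_nulls_for_each_entry(l, n, head, hash_node)
			bpf_timer_cancel_and_free(l->key +
						  round_up(htab->map.key_size, 8) +
						  htab->map.timer_off);
	}
	rcu_read_unlock();
}
```

Note that cond_resched() (used in the prealloc variant) would not be legal inside the rcu_read_lock() section, which may argue for the per-bucket lock instead if the walk can be long.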