On Sat, Jan 16, 2021 at 8:22 PM Cong Wang <xiyou.wangcong@xxxxxxxxx> wrote:
> +static void htab_gc(struct work_struct *work)
> +{
> +	struct htab_elem *e, *tmp;
> +	struct llist_node *lhead;
> +	struct bpf_htab *htab;
> +	int i, count;
> +
> +	htab = container_of(work, struct bpf_htab, gc_work.work);
> +	lhead = llist_del_all(&htab->gc_list);
> +
> +	llist_for_each_entry_safe(e, tmp, lhead, gc_node) {
> +		unsigned long flags;
> +		struct bucket *b;
> +		u32 hash;
> +
> +		hash = e->hash;
> +		b = __select_bucket(htab, hash);
> +		if (htab_lock_bucket(htab, b, hash, &flags))
> +			continue;
> +		hlist_nulls_del_rcu(&e->hash_node);
> +		atomic_set(&e->pending, 0);
> +		free_htab_elem(htab, e);
> +		htab_unlock_bucket(htab, b, hash, flags);
> +
> +		cond_resched();
> +	}
> +
> +	for (count = 0, i = 0; i < htab->n_buckets; i++) {

I just realized a followup fix is not folded into this patch: I actually
added a timestamp check here to avoid scanning the whole table more
frequently than once per second. It is clearly my mistake to have missed
it when formatting this patchset. I will send v5 after waiting for other
feedback.

Thanks!
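
P.S. The missing check is roughly of the following shape. This is only a
minimal sketch, not the actual v5 change: the last_gc field is
illustrative and assumes it is added to struct bpf_htab and initialized
to jiffies when the map is allocated.

	/*
	 * Sketch only: assumes "unsigned long last_gc;" exists in
	 * struct bpf_htab and holds the jiffies of the last full scan.
	 */
	static void htab_gc(struct work_struct *work)
	{
		struct bpf_htab *htab = container_of(work, struct bpf_htab,
						     gc_work.work);

		/* ... drain htab->gc_list as in the hunk quoted above ... */

		/* Rate-limit the full-table scan to once per second. */
		if (!time_after(jiffies, htab->last_gc + HZ))
			return;
		htab->last_gc = jiffies;

		/* ... for (count = 0, i = 0; i < htab->n_buckets; i++) ... */
	}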