On Wed, Jul 5, 2023 at 9:00 AM Anton Protopopov <aspsk@xxxxxxxxxxxxx> wrote:
>
> Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> maps. Non-trivial changes apply to preallocated maps, for which the
> {inc,dec}_elem_count functions are not called, as there's no need to count
> elements to sustain proper map operations.
>
> To increase/decrease percpu counters for preallocated hash maps, we add raw
> calls to the bpf_map_{inc,dec}_elem_count functions so that the impact is
> minimal. For dynamically allocated maps we add corresponding calls to the
> existing {inc,dec}_elem_count functions.
>
> For LRU maps, bpf_map_{inc,dec}_elem_count calls are added to the LRU pop/free
> helpers.
>
> Signed-off-by: Anton Protopopov <aspsk@xxxxxxxxxxxxx>
> ---
>  kernel/bpf/hashtab.c | 23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 56d3da7d0bc6..c23557bf9a1a 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -302,6 +302,7 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
>         struct htab_elem *l;
>
>         if (node) {
> +               bpf_map_inc_elem_count(&htab->map);
>                 l = container_of(node, struct htab_elem, lru_node);
>                 memcpy(l->key, key, htab->map.key_size);
>                 return l;
> @@ -581,10 +582,17 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
>                 }
>         }
>
> +       err = bpf_map_init_elem_count(&htab->map);
> +       if (err)
> +               goto free_extra_elements;
> +
>         return &htab->map;
>
> +free_extra_elements:
> +       free_percpu(htab->extra_elems);
>  free_prealloc:
> -       prealloc_destroy(htab);
> +       if (prealloc)
> +               prealloc_destroy(htab);

This is a bit difficult to read. I think the logic would be easier to
understand if bpf_map_init_elem_count() was done right before
htab->buckets = bpf_map_area_alloc(), with "if (err) goto free_htab;" on
failure, and bpf_map_free_elem_count() added under the free_htab label.
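
IOW, something like this (just a rough sketch of the ordering I have in
mind; the intermediate error labels in htab_map_alloc() are elided, and
this relies on free_percpu() being a no-op on NULL, so the unconditional
bpf_map_free_elem_count() on the error path is safe even when the counter
was never allocated):

        err = bpf_map_init_elem_count(&htab->map);
        if (err)
                goto free_htab;

        err = -ENOMEM;
        htab->buckets = bpf_map_area_alloc(htab->n_buckets *
                                           sizeof(struct bucket),
                                           htab->map.numa_node);
        if (!htab->buckets)
                goto free_htab;
        ...
free_htab:
        /* elem_count is NULL if bpf_map_init_elem_count() failed, which is fine */
        bpf_map_free_elem_count(&htab->map);
        /* ...existing free_htab cleanup... */
        return ERR_PTR(err);

That way the counter is allocated and freed at one fixed point, and the
free_extra_elements / "if (prealloc)" special-casing goes away.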