This is a note to let you know that I've just added the patch titled

    bpf: add percpu stats for bpf_map elements insertions/deletions

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    bpf-add-percpu-stats-for-bpf_map-elements-insertions.patch

and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 4fcd45cc2334be78cdb680b7c4d08d1bc16a3b3e
Author: Anton Protopopov <aspsk@xxxxxxxxxxxxx>
Date:   Thu Jul 6 13:39:28 2023 +0000

    bpf: add percpu stats for bpf_map elements insertions/deletions

    [ Upstream commit 25954730461af01f66afa9e17036b051986b007e ]

    Add generic percpu stats for bpf_map element insertions/deletions in
    order to keep track of both the current (approximate) number of
    elements in a map and per-cpu statistics on update/delete operations.

    To expose these stats a particular map implementation should initialize
    the counter and adjust it as needed using the 'bpf_map_*_elem_count'
    helpers provided by this commit.

    Signed-off-by: Anton Protopopov <aspsk@xxxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/20230706133932.45883-2-aspsk@xxxxxxxxxxxxx
    Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
    Stable-dep-of: 876673364161 ("bpf: Defer the free of inner map when necessary")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ba22cf4f5fc0..21b192ce018a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -249,6 +249,7 @@ struct bpf_map {
 	} owner;
 	bool bypass_spec_v1;
 	bool frozen; /* write-once; write-protected by freeze_mutex */
+	s64 __percpu *elem_count;
 };

 static inline bool map_value_has_spin_lock(const struct bpf_map *map)
@@ -1791,6 +1792,35 @@ bpf_map_alloc_percpu(const struct bpf_map *map, size_t size, size_t align,
 }
 #endif

+static inline int
+bpf_map_init_elem_count(struct bpf_map *map)
+{
+	size_t size = sizeof(*map->elem_count), align = size;
+	gfp_t flags = GFP_USER | __GFP_NOWARN;
+
+	map->elem_count = bpf_map_alloc_percpu(map, size, align, flags);
+	if (!map->elem_count)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void
+bpf_map_free_elem_count(struct bpf_map *map)
+{
+	free_percpu(map->elem_count);
+}
+
+static inline void bpf_map_inc_elem_count(struct bpf_map *map)
+{
+	this_cpu_inc(*map->elem_count);
+}
+
+static inline void bpf_map_dec_elem_count(struct bpf_map *map)
+{
+	this_cpu_dec(*map->elem_count);
+}
+
 extern int sysctl_unprivileged_bpf_disabled;

 static inline bool bpf_allow_ptr_leaks(void)
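
For context, here is a minimal sketch of how a map implementation could
consume these helpers. The my_map_* callbacks and my_map_sum_elem_count
below are hypothetical placeholders for illustration, not part of the
patch:

#include <linux/bpf.h>
#include <linux/percpu.h>

/* Hypothetical map callbacks wiring up the elem_count helpers. */

static int my_map_alloc_counter(struct bpf_map *map)
{
	/* Allocate the percpu counter once, at map creation time. */
	return bpf_map_init_elem_count(map);
}

static int my_map_update_elem(struct bpf_map *map, void *key, void *value,
			      u64 flags)
{
	/* ... insert or replace the element ... */

	/* Count only genuinely new elements, not in-place updates. */
	bpf_map_inc_elem_count(map);
	return 0;
}

static int my_map_delete_elem(struct bpf_map *map, void *key)
{
	/* ... remove the element ... */

	bpf_map_dec_elem_count(map);
	return 0;
}

static void my_map_free(struct bpf_map *map)
{
	/* ... free all elements ... */

	/* Release the percpu counter together with the map. */
	bpf_map_free_elem_count(map);
}

/*
 * Reading the element count means summing the per-CPU values; with
 * concurrent updates the result is a snapshot, hence "approximate".
 */
static s64 my_map_sum_elem_count(const struct bpf_map *map)
{
	s64 ret = 0;
	int cpu;

	if (!map->elem_count)
		return 0;

	for_each_possible_cpu(cpu)
		ret += READ_ONCE(*per_cpu_ptr(map->elem_count, cpu));

	return ret;
}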