Yafang Shao wrote:
> Currently we can't get bpf memory usage reliably. bpftool now shows the
> bpf memory footprint, which is different from the bpf memory usage. The
> difference between the footprint shown in bpftool and the memory
> actually allocated by bpf can be quite large in some cases, for example,
>
> - non-preallocated bpf map
>   The memory usage of a non-preallocated bpf map changes dynamically.
>   The allocated element count can be anywhere from 0 to the max entries,
>   but the memory footprint in bpftool only shows a fixed number.
> - bpf metadata consumes more memory than bpf elements
>   In some corner cases, the bpf metadata can consume a lot more memory
>   than the bpf elements themselves. For example, this can happen when
>   the element size is quite small.

Just following up slightly on the previous comment. The metadata should
be fixed and knowable, correct? What I'm getting at is whether this can
be calculated directly instead of through a BPF helper and walking the
entire map.

> We need a way to get the bpf memory usage, especially as there will be
> more and more bpf programs running in production environments, so the
> bpf memory usage is not trivial.

In our environments we track map usage, so we always know how many
entries are in a map. I don't think we use this to calculate the memory
footprint at the moment, just for map usage. It seems, though, that once
you have the entry count, calculating the memory footprint can be done
out of band, because the element and overhead costs are fixed.

> This patchset introduces a new map ops ->map_mem_usage to get the
> memory usage. In this ops, the memory usage is obtained from the
> pointers which are already allocated by a bpf map. To keep the code
> simple, we ignore some small pointers, as their sizes are quite small
> compared with the total usage.
>
> In order to get the memory size from the pointers, some generic mm
> helpers are introduced first, for example, percpu_size(), vsize() and
> kvsize().
> This patchset only implements the bpf memory usage for hashtab. I will
> extend it to other maps and bpf progs (bpf progs can dynamically
> allocate memory via bpf_obj_new()) in the future.

My preference would be to calculate this out of band. Walking a large
map, and doing it in a critical section, to get the memory usage seems
suboptimal.

> The detailed result can be found in patch #7.
>
> Patch #1~#4: Generic mm helpers
> Patch #5   : Introduce new ops
> Patch #6   : Helpers for bpf_mem_alloc
> Patch #7   : hashtab memory usage
>
> Future works:
> - extend it to other maps
> - extend it to bpf prog
> - per-container bpf memory usage
>
> Historical discussions,
> - RFC PATCH v1 mm, bpf: Add BPF into /proc/meminfo
>   https://lwn.net/Articles/917647/
> - RFC PATCH v2 mm, bpf: Add BPF into /proc/meminfo
>   https://lwn.net/Articles/919848/
>
> Yafang Shao (7):
>   mm: percpu: fix incorrect size in pcpu_obj_full_size()
>   mm: percpu: introduce percpu_size()
>   mm: vmalloc: introduce vsize()
>   mm: util: introduce kvsize()
>   bpf: add new map ops ->map_mem_usage
>   bpf: introduce bpf_mem_alloc_size()
>   bpf: hashtab memory usage
>
>  include/linux/bpf.h           |  2 ++
>  include/linux/bpf_mem_alloc.h |  2 ++
>  include/linux/percpu.h        |  1 +
>  include/linux/slab.h          |  1 +
>  include/linux/vmalloc.h       |  1 +
>  kernel/bpf/hashtab.c          | 80 ++++++++++++++++++++++++++++++++++++++++++-
>  kernel/bpf/memalloc.c         | 70 +++++++++++++++++++++++++++++++++++++
>  kernel/bpf/syscall.c          | 18 ++++++----
>  mm/percpu-internal.h          |  4 ++-
>  mm/percpu.c                   | 35 +++++++++++++++++++
>  mm/util.c                     | 15 ++++++++
>  mm/vmalloc.c                  | 17 +++++++++
>  12 files changed, 237 insertions(+), 9 deletions(-)
>
> --
> 1.8.3.1
>