On Wed, Jan 18, 2023 at 1:25 AM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Fri, Jan 13, 2023 at 3:53 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> >
> > On Fri, Jan 13, 2023 at 5:05 AM Alexei Starovoitov
> > <alexei.starovoitov@xxxxxxxxx> wrote:
> > >
> > > On Thu, Jan 12, 2023 at 7:53 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > > >
> > > > Currently there's no way to get BPF memory usage; we can only
> > > > estimate the usage via bpftool or memcg, neither of which is reliable.
> > > >
> > > > - bpftool
> > > >   `bpftool {map,prog} show` can show us the memlock of each map and
> > > >   prog, but the memlock differs from the real memory size. The memlock
> > > >   of a bpf object is approximately
> > > >   `round_up(key_size + value_size, 8) * max_entries`,
> > > >   so 1) it can't apply to non-preallocated bpf maps, which may
> > > >   increase or decrease their real memory size dynamically; 2) the
> > > >   element size of some bpf maps is not `key_size + value_size`, for
> > > >   example the element size of htab is
> > > >   `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`
> > > >   That said, the difference between these two values can be very large
> > > >   if the key_size and value_size are small. For example, in my
> > > >   verification, the memlock and the real memory size of a preallocated
> > > >   hash map are:
> > > >
> > > >   $ grep BPF /proc/meminfo
> > > >   BPF:             350 kB   <<< the size of the preallocated memalloc pool
> > > >
> > > >   (create hash map)
> > > >
> > > >   $ bpftool map show
> > > >   41549: hash  name count_map  flags 0x0
> > > >           key 4B  value 4B  max_entries 1048576  memlock 8388608B
> > > >
> > > >   $ grep BPF /proc/meminfo
> > > >   BPF:           82284 kB
> > > >
> > > >   So the real memory size is $((82284 - 350)), which is 81934 kB,
> > > >   while the memlock is only 8192 kB.
> > >
> > > hashmap with key 4b and value 4b looks artificial to me,
> > > but since you're concerned with accuracy of bpftool reporting,
> > > please fix the estimation in bpf_map_memory_footprint().
> >
> > I thought bpf_map_memory_footprint() was deprecated, so I didn't try
> > to fix it before.
>
> It's not deprecated. It's trying to be accurate.
> See bpf_map_value_size().
> In the past we had to be precise when we calculated the required memory
> before we allocated it, and that was causing ongoing maintenance issues.
> Now bpf_map_memory_footprint() is an estimate for show_fdinfo.
> It can be made more accurate for this map with corner-case key/value sizes.
>

Thanks for the clarification.

> > > You're correct that:
> > >
> > > > the element size of some bpf maps is not `key_size + value_size`, for
> > > > example the element size of htab is
> > > > `sizeof(struct htab_elem) + round_up(key_size, 8) + round_up(value_size, 8)`
> > >
> > > So just teach bpf_map_memory_footprint() to do this more accurately.
> > > Add bucket size to it as well.
> > > Make it even more accurate with prealloc vs not.
> > > Much simpler change than adding run-time overhead to every alloc/free
> > > on bpf side.
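For reference, a rough sketch of what such a hash-map-aware estimate might
look like, along the lines suggested above. This is illustrative only:
htab_mem_footprint() is a hypothetical name, and struct bpf_htab,
struct htab_elem, struct bucket and n_buckets are internals of
kernel/bpf/hashtab.c, so a real version would have to live where those
definitions are visible.

/*
 * Sketch only, not the actual patch: estimate a hash map's memory more
 * precisely than round_up(key_size + value_size, 8) * max_entries.
 */
static unsigned long htab_mem_footprint(const struct bpf_htab *htab)
{
        const struct bpf_map *map = &htab->map;
        /* Real per-element cost: header plus 8-byte-aligned key and value. */
        unsigned long elem_size = sizeof(struct htab_elem) +
                                  round_up(map->key_size, 8) +
                                  round_up(map->value_size, 8);
        /* The bucket array exists whether or not the map is preallocated. */
        unsigned long size = (unsigned long)htab->n_buckets * sizeof(struct bucket);

        /*
         * Preallocated maps carve out all max_entries elements up front.
         * For BPF_F_NO_PREALLOC this is only an upper bound, since
         * elements are allocated on demand.
         */
        size += (unsigned long)map->max_entries * elem_size;

        return round_up(size, PAGE_SIZE);
}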
> >
> > It seems that we'd better introduce ->memory_footprint for some
> > specific bpf maps. I will think about it.
>
> No. Don't build it into a replica of what we had before.
> Make the existing bpf_map_memory_footprint() more accurate.
>

I just don't want to add many if-elses or switch-cases into
bpf_map_memory_footprint(), because I think it is a little ugly.
Introducing a new map ops could make it clearer. For example:

static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
{
        unsigned long size;

        if (map->ops->map_mem_footprint)
                return map->ops->map_mem_footprint(map);

        size = round_up(map->key_size + bpf_map_value_size(map), 8);
        return round_up(map->max_entries * size, PAGE_SIZE);
}

> > > bpf side tracks all of its allocations. There is no need to do that
> > > on the generic mm side.
> > > Exposing an aggregated single number in /proc/meminfo also looks wrong.
> >
> > Do you mean that we shouldn't expose it in /proc/meminfo?
>
> We should not, because it helps one particular use case only.
> Somebody else might want map mem info per container,
> then somebody would need it per user, etc.

It seems we should show memcg info and user info in bpftool map show.

> bpftool map show | awk
> solves all those cases without adding new uapi-s.

Makes sense to me.

--
Regards
Yafang
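For completeness, the kind of aggregation referred to above could be done
today with something like the following one-liner. This is a sketch only:
bpftool's plain-text output differs between versions, so it scans for the
field that follows the "memlock" keyword rather than assuming a fixed column.

$ bpftool map show | \
        awk '{ for (i = 1; i < NF; i++)        # scan each field
                       if ($i == "memlock") {  # the value follows the keyword
                               gsub(/B$/, "", $(i + 1))
                               sum += $(i + 1)
                       } }
             END { printf "total memlock: %d bytes\n", sum }'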