Re: [RFC PATCH bpf-next v2 00/11] mm, bpf: Add BPF into /proc/meminfo

On Mon, Jan 30, 2023 at 9:14 PM Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
>
> On Sat, Jan 28, 2023 at 07:49:08PM +0800, Yafang Shao wrote:
> > On Thu, Jan 26, 2023 at 1:45 PM Alexei Starovoitov
> > <alexei.starovoitov@xxxxxxxxx> wrote:
> > >
> > > On Tue, Jan 17, 2023 at 10:49 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > > > > > I just don't want to add many if-elses or switch-cases into
> > > > > > bpf_map_memory_footprint(), because I think it is a little ugly.
> > > > > > Introducing a new map ops could make it more clear.  For example,
> > > > > > static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
> > > > > > {
> > > > > >     unsigned long size;
> > > > > >
> > > > > >     if (map->ops->map_mem_footprint)
> > > > > >         return map->ops->map_mem_footprint(map);
> > > > > >
> > > > > >     size = round_up(map->key_size + bpf_map_value_size(map), 8);
> > > > > >     return round_up(map->max_entries * size, PAGE_SIZE);
> > > > > > }
> > > > >
> > > > > It is also ugly, because bpf_map_value_size() already has if-stmt.
> > > > > I prefer to keep all estimates in one place.
> > > > > There is no need to be 100% accurate.
> > > >
> > > > Per my investigation, it can be almost accurate with little effort.
> > > > Take the htab for example,
> > > > static unsigned long htab_mem_footprint(const struct bpf_map *map)
> > > > {
> > > >     struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
> > > >     unsigned long size = 0;
> > > >
> > > >     if (!htab_is_prealloc(htab)) {
> > > >         size += htab_elements_size(htab);
> > > >     }
> > > >     size += kvsize(htab->elems);
> > > >     size += percpu_size(htab->extra_elems);
> > > >     size += kvsize(htab->buckets);
> > > >     size += bpf_mem_alloc_size(&htab->pcpu_ma);
> > > >     size += bpf_mem_alloc_size(&htab->ma);
> > > >     if (htab->use_percpu_counter)
> > > >         size += percpu_size(htab->pcount.counters);
> > > >     size += percpu_size(htab->map_locked[0]) * HASHTAB_MAP_LOCK_COUNT;
> > > >     size += kvsize(htab);
> > > >     return size;
> > > > }
> > >
> > > Please don't.
> > > Above doesn't look maintainable.
> >
> > It is similar to htab_map_free(). The pointers accounted here are
> > exactly the ones that will be freed in map_free(), so we only need
> > to keep map_mem_footprint() in sync with map_free(). That shouldn't
> > be a maintenance problem.
> >
> > > Look at kvsize(htab). Do you really care about a hundred bytes?
> > > Just accept that there will be a small constant difference
> > > between what show_fdinfo reports and the real memory.
> >
> > The point is that we don't have a clear idea how large that margin is.
> >
> > > You cannot make it 100% accurate.
> > > There is kfence, which may allocate 4k even though you asked for kmalloc(8).
> > >
> >
> > We already have ksize()[1], which already accounts for kfence.
> >
> > [1]. https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/mm/slab_common.c#n1431
> >
> > > > We just need to get the real memory size from the pointer instead of
> > > > calculating the size again.
> > > > For a non-preallocated htab it takes a bit more work to get the element
> > > > size (as opposed to the unit_size), but it won't be a big deal.
> > >
> > > You'd have to convince mm folks that kvsize() is worth doing.
> > > I don't think it will be easy.
> > >
> >
> > As I mentioned above, we already have ksize(), so we only need to
> > introduce vsize().  Per my understanding, we can simply use
> > vm_struct->size to get the vmalloc size; see also patch #5 in this
> > patchset[2].
> >
> > Andrew, Uladzislau, Christoph, do you have any comments on this newly
> > introduced vsize()[2]?
> >
> > [2]. https://lore.kernel.org/bpf/20230112155326.26902-6-laoar.shao@xxxxxxxxx/
> >
> <snip>
> +/* Report full size of underlying allocation of a vmalloc'ed addr */
> +static inline size_t vsize(const void *addr)
> +{
> +       struct vm_struct *area;
> +
> +       if (!addr)
> +               return 0;
> +
> +       area = find_vm_area(addr);
> +       if (unlikely(!area))
> +               return 0;
> +
> +       return area->size;
> +}
> <snip>
>
> You cannot access the area after the lock is dropped, because we do not
> have any ref counters for VA objects. Therefore it should be done like below:
>
>
> <snip>
>   spin_lock(&vmap_area_lock);
>   va = __find_vmap_area(addr, &vmap_area_root);
>   if (va && va->vm)
>     va_size = va->vm->size;
>   spin_unlock(&vmap_area_lock);
>
>   return va_size;
> <snip>
>

Ah, right, the lookup has to be done under the global vmap_area_lock; I missed that.
Many thanks for the detailed explanation.
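So for the next revision I would drop the inline helper and move vsize()
into mm/vmalloc.c itself, doing the lookup under vmap_area_lock roughly as
below (just a sketch based on your snippet, to confirm I understood it
correctly):

<snip>
/* Report full size of the underlying allocation of a vmalloc'ed addr */
size_t vsize(const void *addr)
{
	struct vmap_area *va;
	size_t va_size = 0;

	if (!addr)
		return 0;

	/* The vmap_area must not be touched once the lock is dropped. */
	spin_lock(&vmap_area_lock);
	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
	if (va && va->vm)
		va_size = va->vm->size;
	spin_unlock(&vmap_area_lock);

	return va_size;
}
<snip>

Then the kvsize() helper used in the footprint code above would simply
dispatch to ksize() or vsize() depending on is_vmalloc_addr().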

-- 
Regards
Yafang


