Re: [PATCH bpf-next 1/2] libbpf: add safer high-level wrappers for map operations

On Thu, May 12, 2022 at 12:12 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 5/12/22 1:14 AM, Andrii Nakryiko wrote:
> > Add high-level API wrappers for the most common and typical BPF map
> > operations that work directly on instances of struct bpf_map * (so you
> > don't have to call bpf_map__fd()) and validate key/value size
> > expectations.
> >
> > These helpers require users to specify key (and value, where
> > appropriate) sizes when performing lookup/update/delete/etc. This forces
> > users to actually think about and validate those sizes for themselves.
> > This is a good thing, as the kernel expects the user to implicitly
> > provide correctly sized key/value buffers and will just read/write the
> > necessary amount of data. If the user doesn't set up the buffers
> > correctly (which has bitten people with per-CPU maps especially), the
> > kernel either randomly overwrites stack data or returns -EFAULT,
> > depending on the user's luck and circumstances. These high-level APIs
> > are meant to prevent such unpleasant and hard-to-debug bugs.
> >
> > This patch also adds the bpf_map_delete_elem_flags() low-level API and
> > requires passing flags to the bpf_map__delete_elem() API for consistency
> > across all similar APIs, even though the kernel currently doesn't expect
> > any extra flags for the BPF_MAP_DELETE_ELEM operation.
> >
> > List of map operations that get these high-level APIs:
> >    - bpf_map_lookup_elem;
> >    - bpf_map_update_elem;
> >    - bpf_map_delete_elem;
> >    - bpf_map_lookup_and_delete_elem;
> >    - bpf_map_get_next_key.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> [...]
>
> (Looks like the set needs a rebase; just a small comment below.)
>
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 4867a930628b..0ee3943aeaeb 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -9949,6 +9949,96 @@ bpf_object__find_map_by_offset(struct bpf_object *obj, size_t offset)
> >       return libbpf_err_ptr(-ENOTSUP);
> >   }
> >
> > +static int validate_map_op(const struct bpf_map *map, size_t key_sz,
> > +                        size_t value_sz, bool check_value_sz)
> > +{
> > +     if (map->fd <= 0)
> > +             return -ENOENT;
> > +     if (map->def.key_size != key_sz)
> > +             return -EINVAL;
> > +
> > +     if (!check_value_sz)
> > +             return 0;
> > +
> > +     switch (map->def.type) {
> > +     case BPF_MAP_TYPE_PERCPU_ARRAY:
> > +     case BPF_MAP_TYPE_PERCPU_HASH:
> > +     case BPF_MAP_TYPE_LRU_PERCPU_HASH:
> > +     case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE:
> > +             if (value_sz != libbpf_num_possible_cpus() * roundup(map->def.value_size, 8))
> > +                     return -EINVAL;
> > +             break;
>
> I think this is fine, imho. My initial reaction would be that we should let
> the kernel handle errors and not have some kind of additional gate in libbpf
> that would later on make it hard to debug/correlate where errors are coming
> from, but this one here is imho valid given we've seen hard-to-debug
> corruptions in the past, e.g. f3515b5d0b71 ("bpf: provide a generic macro for
> percpu values for selftests"), where otherwise no error is thrown, just
> corruption. Maybe the above warrants a pr_warn() in addition to the -EINVAL.
> Other than that I think we should be very selective in terms of what we add
> in here to avoid the mentioned error layers. Given it's the user's choice
> which API to use, the above is okay imho.
>

yep, can add a pr_warn() for size mismatches that reports expected vs
provided key/value sizes. Given this is not expected to happen in a
correct production program, I'm not concerned with the overhead of
pr_warn() here. Will rebase and add this, thanks!
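For illustration, the key size check could report the mismatch roughly
like this (a sketch only; the exact wording and placement are
assumptions, not the final patch; bpf_map__name() and pr_warn() are
existing libbpf helpers):

	if (map->def.key_size != key_sz) {
		pr_warn("map '%s': unexpected key size %zu provided, expected %u\n",
			bpf_map__name(map), key_sz, map->def.key_size);
		return -EINVAL;
	}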

> > +     default:
> > +             if (map->def.value_size != value_sz)
> > +                     return -EINVAL;
> > +             break;
> > +     }
> > +     return 0;
> > +}
> > +
> > +int bpf_map__lookup_elem(const struct bpf_map *map,
> > +                      const void *key, size_t key_sz,
> > +                      void *value, size_t value_sz, __u64 flags)
> > +{
> > +     int err;
> > +
> > +     err = validate_map_op(map, key_sz, value_sz, true);
> > +     if (err)
> > +             return libbpf_err(err);
> > +
> > +     return bpf_map_lookup_elem_flags(map->fd, key, value, flags);
> > +}
> > +
> [...]
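
For reference, usage of the new helper might look roughly like this (a
sketch; "obj" is an already-loaded bpf_object, the map name and value
type are made up, and error handling of the setup calls is elided). The
per-CPU case is the one that used to bite people: the value buffer has
to cover libbpf_num_possible_cpus() slots, each rounded up to 8 bytes,
which is exactly what validate_map_op() checks above:

	int nr_cpus = libbpf_num_possible_cpus();
	struct bpf_map *map = bpf_object__find_map_by_name(obj, "percpu_counters");
	__u32 key = 0;
	__u64 values[nr_cpus];	/* one 8-byte-aligned slot per possible CPU */
	int err;

	err = bpf_map__lookup_elem(map, &key, sizeof(key),
				   values, nr_cpus * sizeof(values[0]), 0);
	if (err)	/* -EINVAL (plus a warning) if sizes don't match the map definition */
		return err;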


