Re: [PATCH bpf-next v6 5/7] bpftool: Implement btfgen_get_btf()

On Fri, Feb 11, 2022 at 7:42 PM Andrii Nakryiko
<andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Wed, Feb 9, 2022 at 2:27 PM Mauricio Vásquez <mauricio@xxxxxxxxxx> wrote:
> >
> > The last part of the BTFGen algorithm is to create a new BTF object with
> > all the types that were recorded in the previous steps.
> >
> > This function performs two different steps:
> > 1. Add the types to the new BTF object by using btf__add_type(). Some
> > special logic around struct and unions is implemented to only add the
> > members that are really used in the field-based relocations. The type
> > ID on the new and old BTF objects is stored on a map.
> > 2. Fix all the type IDs on the new BTF object by using the IDs saved in
> > the previous step.
> >
> > Signed-off-by: Mauricio Vásquez <mauricio@xxxxxxxxxx>
> > Signed-off-by: Rafael David Tinoco <rafael.tinoco@xxxxxxxxxxx>
> > Signed-off-by: Lorenzo Fontana <lorenzo.fontana@xxxxxxxxxx>
> > Signed-off-by: Leonardo Di Donato <leonardo.didonato@xxxxxxxxxx>
> > ---
> >  tools/bpf/bpftool/gen.c | 136 +++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 135 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
> > index c3e34db2ec8a..1efc7f3c64b2 100644
> > --- a/tools/bpf/bpftool/gen.c
> > +++ b/tools/bpf/bpftool/gen.c
> > @@ -1481,10 +1481,144 @@ static int btfgen_record_obj(struct btfgen_info *info, const char *obj_path)
> >         return err;
> >  }
> >
> > +static unsigned int btfgen_get_id(struct hashmap *ids, unsigned int old)
> > +{
> > +       uintptr_t new;
> > +
> > +       if (!hashmap__find(ids, uint_as_hash_key(old), (void **)&new))
> > +               /* return id for BTF_KIND_VOID as it's possible that the
> > +                * ID we're looking for is the type of a pointer that
> > +                * we're not adding.
> > +                */
> > +               return 0;
> > +
> > +       return (unsigned int)(uintptr_t)new;
> > +}
> > +
> > +static int btfgen_add_id(struct hashmap *ids, unsigned int old, unsigned int new)
> > +{
> > +       return hashmap__add(ids, uint_as_hash_key(old), uint_as_hash_key(new));
> > +}
> > +
> > +static int btfgen_remap_id(__u32 *type_id, void *ctx)
> > +{
> > +       struct hashmap *ids = ctx;
> > +
> > +       *type_id = btfgen_get_id(ids, *type_id);
> > +
> > +       return 0;
> > +}
> > +
> >  /* Generate BTF from relocation information previously recorded */
> >  static struct btf *btfgen_get_btf(struct btfgen_info *info)
> >  {
> > -       return ERR_PTR(-EOPNOTSUPP);
> > +       struct btf *btf_new = NULL;
> > +       struct hashmap *ids = NULL;
> > +       unsigned int i;
> > +       int err = 0;
> > +
> > +       btf_new = btf__new_empty();
> > +       if (!btf_new) {
> > +               err = -errno;
> > +               goto err_out;
> > +       }
> > +
> > +       ids = hashmap__new(btfgen_hash_fn, btfgen_equal_fn, NULL);
> > +       if (IS_ERR(ids)) {
> > +               err = PTR_ERR(ids);
> > +               goto err_out;
> > +       }
> > +
> > +       /* first pass: add all marked types to btf_new and add their new ids to the ids map */
> > +       for (i = 1; i < btf__type_cnt(info->marked_btf); i++) {
>
> small nit: why keep calling btf__type_cnt() on each iteration? store
> it as n = btf__type_cnt(...) and do i < n ?

Fixed
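(i.e. hoisting the count out of the loop; roughly:)

unsigned int i, n = btf__type_cnt(info->marked_btf);

for (i = 1; i < n; i++) {
	/* same loop body as before */
}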

>
> > +               const struct btf_type *cloned_type, *btf_type;
> > +               int new_id;
> > +
> > +               cloned_type = btf__type_by_id(info->marked_btf, i);
> > +
> > +               if (cloned_type->name_off != MARKED)
> > +                       continue;
>
> see, if you did
>
> #define MARKED (1<<31)
>
> and did
>
> t->name_off |= MARKED
>
> everywhere, then you wouldn't need src_btf anymore, as you'd just
> restore original name_off right here with t->name_off &= ~MARKED.
>
> But it's fine, just wanted to point out why I wanted to use one bit,
> so that original values are still available.

I see, thanks for the explanation. In both cases a BTF copy to pass to
libbpf is needed, hence I'd say there's not that much difference.

>
> > +
> > +               btf_type = btf__type_by_id(info->src_btf, i);
> > +
> > +               /* add members for struct and union */
> > +               if (btf_is_struct(btf_type) || btf_is_union(btf_type)) {
>
> btf_is_composite(btf_type)
>
> > +                       struct btf_type *btf_type_cpy;
> > +                       int nmembers = 0, idx_dst, idx_src;
> > +                       size_t new_size;
> > +
> > +                       /* calculate nmembers */
> > +                       for (idx_src = 0; idx_src < btf_vlen(cloned_type); idx_src++) {
> > +                               struct btf_member *cloned_m = btf_members(cloned_type) + idx_src;
>
> a bit nicer pattern is:
>
>
> struct btf_member *m = btf_members(cloned_type);
> int vlen = btf_vlen(cloned_type)
>
> for (i = 0; i < vlen; i++, m++) {
> }
>
> That way you don't have to re-calculate member
>

Reworked the code with the other suggestions below.

> > +
> > +                               if (cloned_m->name_off == MARKED)
> > +                                       nmembers++;
> > +                       }
> > +
> > +                       new_size = sizeof(struct btf_type) + nmembers * sizeof(struct btf_member);
> > +
> > +                       btf_type_cpy = malloc(new_size);
> > +                       if (!btf_type_cpy)
> > +                               goto err_out;
> > +
> > +                       /* copy btf type */
> > +                       *btf_type_cpy = *btf_type;
> > +
> > +                       idx_dst = 0;
> > +                       for (idx_src = 0; idx_src < btf_vlen(cloned_type); idx_src++) {
> > +                               struct btf_member *btf_member_src, *btf_member_dst;
> > +                               struct btf_member *cloned_m = btf_members(cloned_type) + idx_src;
> > +
> > +                               /* copy only members that are marked as used */
> > +                               if (cloned_m->name_off != MARKED)
> > +                                       continue;
> > +
> > +                               btf_member_src = btf_members(btf_type) + idx_src;
> > +                               btf_member_dst = btf_members(btf_type_cpy) + idx_dst;
> > +
> > +                               *btf_member_dst = *btf_member_src;
> > +
> > +                               idx_dst++;
> > +                       }
> > +
> > +                       /* set new vlen */
> > +                       btf_type_cpy->info = btf_type_info(btf_kind(btf_type_cpy), nmembers,
> > +                                                          btf_kflag(btf_type_cpy));
> > +
> > +                       err = btf__add_type(btf_new, info->src_btf, btf_type_cpy);
> > +                       free(btf_type_cpy);
>
> hmm.. this malloc and the rest still feels clunky... why not do it
> explicitly with btf__add_struct()/btf__add_union() and then
> btf__add_field() for each marked field? You also won't need to
> pre-calculate the number of members (libbpf will adjust number of
> members automatically, it's pretty nice API, try it).
>

You're right. The code looks better with this API.
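Roughly what the composite handling looks like now (just a sketch, error
handling and anonymous-name details omitted):

const char *name = btf__str_by_offset(info->src_btf, btf_type->name_off);
struct btf_member *cloned_m = btf_members(cloned_type);
struct btf_member *m = btf_members(btf_type);
unsigned short vlen = btf_vlen(cloned_type);
int idx_src;

/* add the (empty) struct/union first, keeping the original size */
if (btf_is_struct(btf_type))
	err = btf__add_struct(btf_new, name, btf_type->size);
else
	err = btf__add_union(btf_new, name, btf_type->size);
if (err < 0)
	goto err_out;
new_id = err;

/* then append only the members that were marked as used;
 * libbpf bumps vlen for us on each btf__add_field()
 */
for (idx_src = 0; idx_src < vlen; idx_src++, cloned_m++, m++) {
	if (cloned_m->name_off != MARKED)
		continue;

	name = btf__str_by_offset(info->src_btf, m->name_off);
	err = btf__add_field(btf_new, name, m->type,
			     btf_member_bit_offset(btf_type, idx_src),
			     btf_member_bitfield_size(btf_type, idx_src));
	if (err < 0)
		goto err_out;
}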

> You can also use err = err ?: btf__add_xxx() pattern to minimize error
> handling conditionals
>

mmm, I didn't find a place where it could improve the code in this case.
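(For reference, my reading of that pattern, with hypothetical field names
and IDs just to illustrate:)

err = btf__add_field(btf_new, "a", a_id, 0, 0);
err = err ?: btf__add_field(btf_new, "b", b_id, 64, 0);
err = err ?: btf__add_field(btf_new, "c", c_id, 128, 0);
if (err)
	goto err_out;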

>
>
> > +               } else {
> > +                       err = btf__add_type(btf_new, info->src_btf, btf_type);
> > +               }
> > +
> > +               if (err < 0)
> > +                       goto err_out;
> > +
> > +               new_id = err;
> > +
> > +               /* add ID mapping */
> > +               err = btfgen_add_id(ids, i, new_id);
>
> Why using clunky hashmap API if we are talking about mapping
> sequential integers? Just allocate an array of btf__type_cnt()
> integers and use that as a straightforward map?
>

Makes sense. A hashmap would probably use a bit less memory, but I think
the readability improvement is worth it.
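The remap side then becomes trivial; something like (sketch):

/* ids is calloc(btf__type_cnt(info->src_btf), sizeof(*ids));
 * ids[old_id] holds the id in btf_new, 0 (BTF_KIND_VOID) if not added
 */
static int btfgen_remap_id(__u32 *type_id, void *ctx)
{
	unsigned int *ids = ctx;

	*type_id = ids[*type_id];

	return 0;
}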


>
> > +               if (err)
> > +                       goto err_out;
> > +       }
> > +
> > +       /* second pass: fix up type ids */
> > +       for (i = 1; i < btf__type_cnt(btf_new); i++) {
> > +               struct btf_type *btf_type = (struct btf_type *) btf__type_by_id(btf_new, i);
> > +
> > +               err = btf_type_visit_type_ids(btf_type, btfgen_remap_id, ids);
> > +               if (err)
> > +                       goto err_out;
> > +       }
> > +
> > +       hashmap__free(ids);
> > +       return btf_new;
> > +
> > +err_out:
> > +       btf__free(btf_new);
> > +       hashmap__free(ids);
> > +       errno = -err;
> > +       return NULL;
> >  }
> >
> >  /* Create minimized BTF file for a set of BPF objects.
> > --
> > 2.25.1
> >



