Re: [PATCH bpf-next 1/2] libbpf: __attribute__((btf_decl_tag("..."))) for btf dump in C format

On Sun, Nov 6, 2022 at 1:40 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Fri, 2022-11-04 at 13:54 -0700, Andrii Nakryiko wrote:
> > On Thu, Nov 3, 2022 at 6:45 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
> > >
> > > Clang's `__attribute__((btf_decl_tag("...")))` is represented in BTF
> > > as a record of kind BTF_KIND_DECL_TAG whose `type` field points to
> > > the type annotated with the attribute. This commit adds
> > > reconstitution of such attributes when dumping BTF in C format.
> > >
> > > The BTF documentation says that BTF_KIND_DECL_TAG records should
> > > follow their target type, but this is not enforced and tests don't
> > > honor this restriction. This commit therefore uses a hashmap to map
> > > each type to its list of decl tags. The hashmap is filled by the
> > > `btf_dump_assign_decl_tags` function, called from `btf_dump__new`.
> > >
> > > It is assumed that the total number of types annotated with decl
> > > tags is relatively small, so some space is saved by using a hashmap
> > > instead of adding a new field to `struct btf_dump_type_aux_state`.
> > >
> > > It is also assumed that the list of decl tags associated with a
> > > single type is small, so the list is represented by an array that
> > > grows linearly (see the sketch below).
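> > >
> > > The per-type list is a flat array with a small header, along the
> > > lines of (a sketch; the exact field layout may differ):
> > >
> > >  /* sketch of the shape used in btf_dump.c */
> > >  struct decl_tag_array {
> > >          __u32 cnt;       /* number of decl tag ids stored */
> > >          __u32 tag_ids[]; /* ids of BTF_KIND_DECL_TAG records */
> > >  };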
> > >
> > > To accommodate older Clang versions, decl tags are dumped using the
> > > following macro:
> > >
> > >  #if __has_attribute(btf_decl_tag)
> > >  #  define __btf_decl_tag(x) __attribute__((btf_decl_tag(x)))
> > >  #else
> > >  #  define __btf_decl_tag(x)
> > >  #endif
> > >
> > > The macro definition is emitted upon the first call to `btf_dump__dump_type`.
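> > >
> > > For example, a struct annotated in the source as:
> > >
> > >  struct foo {                             /* illustrative type */
> > >          int x __attribute__((btf_decl_tag("a")));
> > >  } __attribute__((btf_decl_tag("b")));
> > >
> > > would be dumped along the lines of (a sketch; exact placement of
> > > the tags may differ):
> > >
> > >  struct foo {
> > >          int x __btf_decl_tag("a");
> > >  } __btf_decl_tag("b");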
> > >
> > > Clang allows attaching btf_decl_tag attributes to the following
> > > kinds of items:
> > > - struct/union         supported
> > > - struct/union field   supported
> > > - typedef              supported
> > > - function             not applicable
> > > - function parameter   not applicable
> > > - variable             not applicable
> > >
> > > Signed-off-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> > > ---
> > >  tools/lib/bpf/btf_dump.c | 163 ++++++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 160 insertions(+), 3 deletions(-)
> > >
> >

[...]

> > >  struct btf_dump *btf_dump__new(const struct btf *btf,
> > >                                btf_dump_printf_fn_t printf_fn,
> > > @@ -179,11 +211,21 @@ struct btf_dump *btf_dump__new(const struct btf *btf,
> > >                 d->ident_names = NULL;
> > >                 goto err;
> > >         }
> > > +       d->decl_tags = hashmap__new(identity_hash_fn, identity_equal_fn, NULL);
> > > +       if (IS_ERR(d->decl_tags)) {
> > > +               err = PTR_ERR(d->decl_tags);
> > > +               d->decl_tags = NULL;
> >
> > nit: no need to clear out ERR pointer, hashmap__free() handles that properly
>
> The `err` is passed to `libbpf_err_ptr` at the end of the function:
>
> struct btf_dump *btf_dump__new(...)
> {
>         ...
> err:
>         btf_dump__free(d);
>         return libbpf_err_ptr(err);
> }
>
> `libbpf_err_ptr` uses it to update the global `errno`, so I think the
> PTR_ERR call is not redundant in this case.
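>
> As far as I can tell it is roughly (paraphrasing libbpf_internal.h):
>
> static inline void *libbpf_err_ptr(int err)
> {
>         /* set errno on error, this doesn't break anything */
>         errno = -err;
>         return NULL;
> }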
>

I was talking about `d->decl_tags = NULL;`: you don't need to do that,
since hashmap__free() handles such a non-NULL error pointer just fine.
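
It starts with an IS_ERR_OR_NULL() check, roughly:

void hashmap__free(struct hashmap *map)
{
        if (IS_ERR_OR_NULL(map))
                return;

        hashmap__clear(map);
        free(map);
}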


> >
> > > +               goto err;
> > > +       }
> > >
> > >         err = btf_dump_resize(d);
> > >         if (err)
> > >                 goto err;
> > >
> > > +       err = btf_dump_assign_decl_tags(d);
> > > +       if (err)
> > > +               goto err;
> > > +
> > >         return d;
> > >  err:
> > >         btf_dump__free(d);

[...]

> > >
> > > +/*
> > > + * This hashmap lookup is used in several places, so extract it as a
> > > + * function to hide all the ceremony with casts and NULL assignment.
> > > + */
> > > +static struct decl_tag_array *btf_dump_find_decl_tags(struct btf_dump *d, __u32 id)
> > > +{
> > > +       struct decl_tag_array *decl_tags = NULL;
> > > +
> > > +       hashmap__find(d->decl_tags, (void *)(uintptr_t)id, (void **)&decl_tags);
> > > +
> > > +       return decl_tags;
> > > +}
> > > +
> >
> > with your hashmap void * -> long refactoring this is not necessary,
> > though, right?
>
> If that refactoring is accepted, the casts would go away, but it is
> still convenient for me to have a function returning a pointer for use
> in btf_dump_emit_typedef_def. I can inline it in all three call
> locations, but I think it is a bit cleaner this way.
>

I think

if (!hashmap__find(d->decl_tags, id, (void **)&decl_tags)) {
  /* handle NULL case */
}

is just as readable. Let's not add unnecessary helpers.
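
And when a call site just wants NULL-or-pointer, initializing the
out-param works too, since hashmap__find() doesn't touch it on a miss:

struct decl_tag_array *decl_tags = NULL;

hashmap__find(d->decl_tags, id, (void **)&decl_tags);
/* decl_tags stays NULL if id has no associated tags */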

> >
> > > +/*
> > > + * Scans all BTF objects looking for BTF_KIND_DECL_TAG entries.
> > > + * The id's of the entries are stored in the `btf_dump.decl_tags` table,
> > > + * grouped by a target type.
> > > + */
> > > +static int btf_dump_assign_decl_tags(struct btf_dump *d)
> > > +{
> > > +       __u32 id, new_cnt, type_cnt = btf__type_cnt(d->btf);
> > > +       struct decl_tag_array *decl_tags;
> > > +       const struct btf_type *t;
> > > +       int err;
> > > +
> > > +       for (id = 1; id < type_cnt; id++) {
> > > +               t = btf__type_by_id(d->btf, id);
> > > +               if (!btf_is_decl_tag(t))
> > > +                       continue;
> > > +
> > > +               decl_tags = btf_dump_find_decl_tags(d, t->type);
> > > +               /* Assume small number of decl tags per id, increase array size by 1 */
> > > +               new_cnt = decl_tags ? decl_tags->cnt + 1 : 1;
> > > +               if (new_cnt > MAX_DECL_TAGS_PER_ID)
> > > +                       return -ERANGE;
> >
> > why artificial limitations? The user will pay a price proportional
> > to their BTF, and we don't really care, as the memory is allocated
> > dynamically anyway.
>
> Since you requested changing the allocation strategy from buffer
> doubling to +1, I figured this spot could get unusably slow for some
> large enough value. I'll remove this limitation.

Allocators like jemalloc don't actually reallocate internally on every
realloc() call; they just adjust the size if the value stays within the
same size bucket. So while you can micro-optimize to avoid unnecessary
calls to realloc() (but not necessarily reallocations themselves), it's
not that critical in practice, especially somewhere we don't expect
many thousands of calls.
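
E.g., on glibc you can see the rounding directly (a standalone
illustration, not part of the patch):

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        void *p = malloc(20);

        /* glibc rounds the request up to its chunk size, so a 20-byte
         * request typically reports 24 usable bytes on 64-bit; growing
         * within that slack won't move or copy the allocation */
        printf("usable size: %zu\n", malloc_usable_size(p));

        free(p);
        return 0;
}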

>
> >
> > > +
> > > +               /* Allocate new_cnt + 1 to account for decl_tag_array header */
> > > +               decl_tags = libbpf_reallocarray(decl_tags, new_cnt + 1, sizeof(__u32));
> >

[...]


