Re: [PATCH bpf-next v4 1/3] libbpf: hashmap interface update to allow both long and void* keys/values


 



On Wed, 2022-11-09 at 20:54 -0800, Andrii Nakryiko wrote:
> On Wed, Nov 9, 2022 at 6:26 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
> > 
> > Update libbpf's hashmap interface from the old void*-only form to a
> > polymorphic one, allowing both long and void* keys and values.
> > 
> > This simplifies many use cases in libbpf, as hashmaps there mostly
> > map integers to integers.
> > 
> > Perf carries a copy of the hashmap implementation from libbpf, so
> > that copy has to be updated as well.
> > 
> > Changes to libbpf, selftests/bpf and perf are packed into a single
> > commit to avoid compilation issues during any future bisect.
> > 
> > The polymorphic interface is achieved by hiding the hashmap
> > interface functions behind auxiliary macros that take care of the
> > necessary type casts, for example:
> > 
> >     #define hashmap_cast_ptr(p)                                         \
> >         ({                                                              \
> >                 _Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long),\
> >                                #p " pointee should be a long-sized integer or a pointer"); \
> >                 (long *)(p);                                            \
> >         })
> > 
> >     bool hashmap_find(const struct hashmap *map, long key, long *value);
> > 
> >     #define hashmap__find(map, key, value) \
> >                 hashmap_find((map), (long)(key), hashmap_cast_ptr(value))
> > 
> > - the hashmap__find macro casts the key and value parameters to
> >   long and long* respectively;
> > - hashmap_cast_ptr ensures that the value pointer points to memory
> >   of the appropriate size.
> > 
> > This hack was suggested by Andrii Nakryiko in [1].
> > This is a follow-up to [2].
> > 
> > [1] https://lore.kernel.org/bpf/CAEf4BzZ8KFneEJxFAaNCCFPGqp20hSpS2aCj76uRk3-qZUH5xg@xxxxxxxxxxxxxx/
> > [2] https://lore.kernel.org/bpf/af1facf9-7bc8-8a3d-0db4-7b3f333589a2@xxxxxxxx/T/#m65b28f1d6d969fcd318b556db6a3ad499a42607d
> > 
> > Signed-off-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
> > ---
> >  tools/bpf/bpftool/btf.c                       |  25 +--
> >  tools/bpf/bpftool/common.c                    |  10 +-
> >  tools/bpf/bpftool/gen.c                       |  19 +-
> >  tools/bpf/bpftool/link.c                      |   8 +-
> >  tools/bpf/bpftool/main.h                      |  14 +-
> >  tools/bpf/bpftool/map.c                       |   8 +-
> >  tools/bpf/bpftool/pids.c                      |  16 +-
> >  tools/bpf/bpftool/prog.c                      |   8 +-
> >  tools/lib/bpf/btf.c                           |  41 ++--
> >  tools/lib/bpf/btf_dump.c                      |  17 +-
> >  tools/lib/bpf/hashmap.c                       |  18 +-
> >  tools/lib/bpf/hashmap.h                       |  91 +++++----
> >  tools/lib/bpf/libbpf.c                        |  18 +-
> >  tools/lib/bpf/strset.c                        |  18 +-
> >  tools/lib/bpf/usdt.c                          |  29 ++-
> >  tools/perf/tests/expr.c                       |  28 +--
> >  tools/perf/tests/pmu-events.c                 |   6 +-
> >  tools/perf/util/bpf-loader.c                  |  11 +-
> >  tools/perf/util/evsel.c                       |   2 +-
> >  tools/perf/util/expr.c                        |  36 ++--
> >  tools/perf/util/hashmap.c                     |  18 +-
> >  tools/perf/util/hashmap.h                     |  91 +++++----
> >  tools/perf/util/metricgroup.c                 |  10 +-
> >  tools/perf/util/stat-shadow.c                 |   2 +-
> >  tools/perf/util/stat.c                        |   9 +-
> >  .../selftests/bpf/prog_tests/hashmap.c        | 190 +++++++++++++-----
> 
> would be better if you added new tests in separate patch and didn't
> use CHECK(), but oh well, we'll improve that some time in the future

For my reference, what's wrong with CHECK()?

> But regardless this is a pretty clear win, thanks a lot for working on
> this! I made a few pedantic changes mentioned below, and applied to
> bpf-next.

Sorry, missed a few casts :(

> 
> 
> >  .../bpf/prog_tests/kprobe_multi_test.c        |   6 +-
> >  27 files changed, 411 insertions(+), 338 deletions(-)
> > 
> 
> [...]
> 
> > @@ -545,7 +545,7 @@ void delete_pinned_obj_table(struct hashmap *map)
> >                 return;
> > 
> >         hashmap__for_each_entry(map, entry, bkt)
> > -               free(entry->value);
> > +               free((void *)entry->value);
> 
> entry->pvalue
> 
> > 
> >         hashmap__free(map);
> >  }
> 
> [...]
> 
> > @@ -309,8 +308,7 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
> >         if (!hashmap__empty(link_table)) {
> >                 struct hashmap_entry *entry;
> > 
> > -               hashmap__for_each_key_entry(link_table, entry,
> > -                                           u32_as_hash_field(info->id))
> > +               hashmap__for_each_key_entry(link_table, entry, info->id)
> >                         printf("\n\tpinned %s", (char *)entry->value);
> 
> (char *)entry->pvalue for consistent use of pvalue
> 
> >         }
> >         emit_obj_refs_plain(refs_table, info->id, "\n\tpids ");
> 
> [...]
> 
> > @@ -595,8 +594,7 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
> >         if (!hashmap__empty(map_table)) {
> >                 struct hashmap_entry *entry;
> > 
> > -               hashmap__for_each_key_entry(map_table, entry,
> > -                                           u32_as_hash_field(info->id))
> > +               hashmap__for_each_key_entry(map_table, entry, info->id)
> >                         printf("\n\tpinned %s", (char *)entry->value);
> 
> same, pvalue for consistency
> 
> >         }
> > 
> 
> [...]
> 
> > @@ -561,8 +560,7 @@ static void print_prog_plain(struct bpf_prog_info *info, int fd)
> >         if (!hashmap__empty(prog_table)) {
> >                 struct hashmap_entry *entry;
> > 
> > -               hashmap__for_each_key_entry(prog_table, entry,
> > -                                           u32_as_hash_field(info->id))
> > +               hashmap__for_each_key_entry(prog_table, entry, info->id)
> >                         printf("\n\tpinned %s", (char *)entry->value);
> 
> ditto
> 
> >         }
> > 
> 
> [...]
> 
> > @@ -1536,18 +1536,17 @@ static size_t btf_dump_name_dups(struct btf_dump *d, struct hashmap *name_map,
> >                                  const char *orig_name)
> >  {
> >         char *old_name, *new_name;
> > -       size_t dup_cnt = 0;
> > +       long dup_cnt = 0;
> 
> size_t is fine as is, right?
> 
> >         int err;
> > 
> >         new_name = strdup(orig_name);
> >         if (!new_name)
> >                 return 1;
> > 
> 
> [...]
> 
> > @@ -102,6 +122,13 @@ enum hashmap_insert_strategy {
> >         HASHMAP_APPEND,
> >  };
> > 
> > +#define hashmap_cast_ptr(p)                                            \
> > +       ({                                                              \
> > +               _Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long),\
> > +                              #p " pointee should be a long-sized integer or a pointer"); \
> > +               (long *)(p);                                            \
> > +       })
> > +
> 
> I've reformatted this slightly, making it less indented to the right
> 
> >  /*
> >   * hashmap__insert() adds key/value entry w/ various semantics, depending on
> >   * provided strategy value. If a given key/value pair replaced already
> 
> [...]




