Re: [PATCH v3 bpf-next] libbpf: Improve btf__add_btf() with an additional hashmap for strings.

On Tue, Jan 18, 2022 at 3:21 PM Kui-Feng Lee <kuifeng@xxxxxx> wrote:
>
> Add a hashmap to map the string offsets from a source btf to the
> string offsets from a target btf to reduce overheads.
>
> btf__add_btf() calls btf__add_str() to add strings from a source to a
> target btf.  It causes many string comparisons, and it is a major
> hotspot when adding a big btf.  btf__add_str() uses strcmp() to check
> if a hash entry is the right one.  The extra hashmap here compares
> string offsets instead, which is much cheaper.  It remembers the
> results of btf__add_str() for later use to reduce the cost.
>
> We are parallelizing BTF encoding for pahole by creating separate btf
> instances for worker threads.  These per-thread btf instances will be
> added to the btf instance of the main thread by calling btf__add_str()
> to deduplicate and write out.  With this patch and -j4, the running
> time of pahole drops to about 6.0s from 6.6s.
>
> The following lines are the summary of 'perf stat' w/o the change.
>
>        6.668126396 seconds time elapsed
>
>       13.451054000 seconds user
>        0.715520000 seconds sys
>
> The following lines are the summary w/ the change.
>
>        5.986973919 seconds time elapsed
>
>       12.939903000 seconds user
>        0.724152000 seconds sys
>
> V3 removes an unnecessary check against str_off_map, and merges the
> declarations of two variables into one line.
>
> [v2] https://lore.kernel.org/bpf/20220114193713.461349-1-kuifeng@xxxxxx/
>
> Signed-off-by: Kui-Feng Lee <kuifeng@xxxxxx>
> ---
>  tools/lib/bpf/btf.c | 31 ++++++++++++++++++++++++++++++-
>  1 file changed, 30 insertions(+), 1 deletion(-)
>

[...]

> @@ -1680,6 +1697,9 @@ static int btf_rewrite_type_ids(__u32 *type_id, void *ctx)
>         return 0;
>  }
>
> +static size_t btf_dedup_identity_hash_fn(const void *key, void *ctx);
> +static bool btf_dedup_equal_fn(const void *k1, const void *k2, void *ctx);
> +
>  int btf__add_btf(struct btf *btf, const struct btf *src_btf)
>  {
>         struct btf_pipe p = { .src = src_btf, .dst = btf };
> @@ -1713,6 +1733,11 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
>         if (!off)
>                 return libbpf_err(-ENOMEM);
>
> +       /* Map the string offsets from src_btf to the offsets from btf to improve performance */
> +       p.str_off_map = hashmap__new(btf_dedup_identity_hash_fn, btf_dedup_equal_fn, NULL);
> +       if (p.str_off_map == NULL)

Sorry, I didn't catch this the first time. hashmap__new() returns
ERR_PTR() on error (it's an internal API so we use ERR_PTR() for
pointer-returning APIs), so you need to check for
IS_ERR(p.str_off_map) instead.

> +               return libbpf_err(-ENOMEM);
> +
>         /* bulk copy types data for all types from src_btf */
>         memcpy(t, src_btf->types_data, data_sz);
>
> @@ -1754,6 +1779,8 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
>         btf->hdr->str_off += data_sz;
>         btf->nr_types += cnt;
>
> +       hashmap__free(p.str_off_map);
> +
>         /* return type ID of the first added BTF type */
>         return btf->start_id + btf->nr_types - cnt;
>  err_out:
> @@ -1767,6 +1794,8 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
>          * wasn't modified, so doesn't need restoring, see big comment above */
>         btf->hdr->str_len = old_strs_len;
>
> +       hashmap__free(p.str_off_map);
> +
>         return libbpf_err(err);
>  }
>
> --
> 2.30.2
>


