Re: [PATCH v3 bpf-next 06/13] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().

On Mon, Nov 22, 2021 at 3:47 PM Andrii Nakryiko
<andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Fri, Nov 19, 2021 at 7:33 PM Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Given the BPF program's BTF, perform a linear search through kernel
> > BTFs for a possible candidate.
> > Then wire the result into bpf_core_apply_relo_insn().
> >
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > ---
> >  kernel/bpf/btf.c | 136 ++++++++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 135 insertions(+), 1 deletion(-)
> >
>
> [...]
>
> >  int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
> >                    int relo_idx, void *insn)
> >  {
> > -       return -EOPNOTSUPP;
> > +       if (relo->kind != BPF_CORE_TYPE_ID_LOCAL) {
> > +               struct bpf_core_cand_list *cands;
> > +
> > +               cands = bpf_core_find_cands(ctx, relo->type_id);
>
> this is wrong for many reasons:
>
> 1. you will overwrite previous ctx->cands, if it was already set,
> which leaks memory
> 2. this list of candidates should be keyed by relo->type_id ("root
> type"). Different root types get their own independent lists; so it
> has to be some sort of lookup table from type_id to a list of
> candidates.
>
> 2) means that if you have a bunch of relos against struct task_struct,
> you'll create a list of candidates when processing the first relo whose
> root type is task_struct. All the subsequent relos that have task_struct
> as the root type will re-use that list and potentially trim it down. If
> there are some other relos against, say, struct mm_struct, they will
> have their own independent list of candidates.

Right.
Your prior comment confused me. I deliberately didn't reuse cands across
relos, to avoid introducing a hashtable here the way libbpf does, since
that does too little to actually help.
I think I will go back to the prior version: a linear search for every relo.
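
i.e. something like this, where the candidate list lives only for the
duration of one relo (bpf_core_free_cands() is a made-up helper name,
just to show the intended lifetime):

/* Hypothetical helper, not part of the patch: bpf_core_find_cands()
 * would allocate a fresh list for each relo and bpf_core_apply() would
 * free it right after the relocation is applied, so nothing is stashed
 * in ctx and nothing can be overwritten or leaked between relos.
 */
static void bpf_core_free_cands(struct bpf_core_cand_list *cands)
{
	if (!cands)
		return;
	kfree(cands->cands);
	kfree(cands);
}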
If we actually need to optimize this part of loading,
we'd be better off with a persistent cache of
name -> kernel btf_type-s
that is reused across different programs.
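
A rough sketch of that idea (all names below are made up purely for
illustration; nothing like this exists in the patch). It's keyed by type
name, so every program that asks for e.g. "task_struct" hits the same
entry instead of rescanning all kernel BTFs:

#include <linux/btf.h>
#include <linux/hashtable.h>
#include <linux/jhash.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical persistent cache: type name -> where it was found in the
 * kernel BTFs.  A real version would keep all matches per name (vmlinux
 * plus module BTFs) and would have to invalidate entries when a module
 * BTF goes away; this sketch keeps a single match and ignores that.
 */
struct btf_name_cache_ent {
	struct hlist_node node;
	const char *name;	/* points into the kernel BTF string section */
	const struct btf *btf;	/* which kernel BTF the type lives in */
	u32 type_id;		/* id of the type inside that BTF */
};

static DEFINE_HASHTABLE(btf_name_cache, 10);	/* 1024 buckets */
static DEFINE_MUTEX(btf_name_cache_mutex);

/* Remember a kernel type found by the linear search so the next program
 * (or the next relo) doesn't have to scan all kernel BTFs again.
 */
static void btf_name_cache_add(const struct btf *btf, u32 type_id,
			       const char *name)
{
	struct btf_name_cache_ent *ent;

	ent = kzalloc(sizeof(*ent), GFP_KERNEL);
	if (!ent)
		return;		/* the cache is best effort */
	ent->name = name;
	ent->btf = btf;
	ent->type_id = type_id;

	mutex_lock(&btf_name_cache_mutex);
	hash_add(btf_name_cache, &ent->node, jhash(name, strlen(name), 0));
	mutex_unlock(&btf_name_cache_mutex);
}

/* Look up a previously seen kernel type by name. */
static const struct btf_type *btf_name_cache_find(const char *name,
						  const struct btf **btfp)
{
	u32 hash = jhash(name, strlen(name), 0);
	const struct btf_type *t = NULL;
	struct btf_name_cache_ent *ent;

	mutex_lock(&btf_name_cache_mutex);
	hash_for_each_possible(btf_name_cache, ent, node, hash) {
		if (!strcmp(ent->name, name)) {
			*btfp = ent->btf;
			t = btf_type_by_id(ent->btf, ent->type_id);
			break;
		}
	}
	mutex_unlock(&btf_name_cache_mutex);
	return t;
}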


