On Mon, Nov 22, 2021 at 4:04 PM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Fri, Nov 19, 2021 at 7:33 PM Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> >
> > Without lskel the CO-RE relocations are processed by libbpf before any
> > other work is done. Instead, when lskel is needed, remember each
> > relocation as a RELO_CORE kind. Then, when the loader prog is generated
> > for a given bpf program, pass the CO-RE relos of that program to the gen
> > loader via bpf_gen__record_relo_core(). The gen loader will remember
> > them as-is and later pass them as-is into the kernel.
> >
> > The normal libbpf flow is to process CO-RE relocations early, before
> > call relos are handled. In the case of gen_loader the CO-RE relos have
> > to be added to the other relos so they are copied together when a bpf
> > static function is appended in different places to other main bpf progs.
> > During the copy, append_subprog_relos() will adjust insn_idx for normal
> > relos and for the RELO_CORE kind as well. When that is done, each
> > struct reloc_desc holds correct relos for its specific main prog.
> >
> > Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> > ---
> >  tools/lib/bpf/bpf_gen_internal.h |   3 +
> >  tools/lib/bpf/gen_loader.c       |  41 +++++++++++-
> >  tools/lib/bpf/libbpf.c           | 108 ++++++++++++++++++++++---------
> >  3 files changed, 119 insertions(+), 33 deletions(-)
> >
> > [...]
> >                 if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
> > @@ -5653,6 +5679,9 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
> >                 case RELO_CALL:
> >                         /* handled already */
> >                         break;
> > +               case RELO_CORE:
> > +                       /* will be handled by bpf_program_record_relos() */
> > +                       break;
> >                 default:
> >                         pr_warn("prog '%s': relo #%d: bad relo type %d\n",
> >                                 prog->name, i, relo->type);
> > @@ -6090,6 +6119,35 @@ bpf_object__free_relocs(struct bpf_object *obj)
> >         }
> >  }
> >
> > +static int cmp_relocs(const void *_a, const void *_b)
> > +{
> > +       const struct reloc_desc *a = _a;
> > +       const struct reloc_desc *b = _b;
> > +
> > +       if (a->insn_idx != b->insn_idx)
> > +               return a->insn_idx < b->insn_idx ? -1 : 1;
> > +
> > +       /* no two relocations should have the same insn_idx, but ... */
> > +       if (a->type != b->type)
> > +               return a->type < b->type ? -1 : 1;
> > +
> > +       return 0;
> > +}
> > +
> > +static void bpf_object__sort_relos(struct bpf_object *obj)
> > +{
> > +       int i;
> > +
> > +       for (i = 0; i < obj->nr_programs; i++) {
> > +               struct bpf_program *p = &obj->programs[i];
> > +
> > +               if (!p->nr_reloc)
> > +                       continue;
> > +
> > +               qsort(p->reloc_desc, p->nr_reloc, sizeof(*p->reloc_desc), cmp_relocs);
> > +       }
> > +}
> > +
> >  static int
> >  bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
> >  {
> > @@ -6104,6 +6162,8 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
> >                         err);
> >                 return err;
> >         }
> > +       if (obj->gen_loader)
> > +               bpf_object__sort_relos(obj);
>
> libbpf sorts relos because it does binary search on them (see
> find_prog_insn_relo).

Exactly. After CO-RE relos are added, the array has to be sorted again.
find_prog_insn_relo() will be called after this step.