Re: [PATCH bpf-next 5/8] libbpf: support local function pointer relocation

On Thu, Feb 4, 2021 at 5:54 PM Yonghong Song <yhs@xxxxxx> wrote:
>
> A new relocation RELO_LOCAL_FUNC is added to capture
> local (static) function pointers loaded with ld_imm64
> insns. Such ld_imm64 insns are marked with
> BPF_PSEUDO_FUNC and will be passed to kernel so
> kernel can replace them with proper actual jited
> func addresses.
>
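
For context, the kind of BPF C that ends up needing this relocation
looks roughly like the snippet below. It is a hypothetical example
(map, prog and callback names are made up) written against the
bpf_for_each_map_elem() helper this series adds, so treat the exact
signatures as illustrative only:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 8);
        __type(key, __u32);
        __type(value, __u64);
} arraymap SEC(".maps");

/* static callback: clang emits "ld_imm64 rX, check_elem" to take its
 * address, and libbpf has to mark that insn with BPF_PSEUDO_FUNC so
 * the kernel can patch in the jited address of check_elem
 */
static long check_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *ctx)
{
        return 0; /* 0 == keep iterating */
}

SEC("classifier")
int iterate(struct __sk_buff *skb)
{
        bpf_for_each_map_elem(&arraymap, check_elem, NULL, 0);
        return 0;
}

char _license[] SEC("license") = "GPL";
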
> Signed-off-by: Yonghong Song <yhs@xxxxxx>
> ---
>  tools/lib/bpf/libbpf.c | 33 ++++++++++++++++++++++++++++++---
>  1 file changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 2abbc3800568..a5146c9e3e06 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -188,6 +188,7 @@ enum reloc_type {
>         RELO_CALL,
>         RELO_DATA,
>         RELO_EXTERN,
> +       RELO_LOCAL_FUNC,

libbpf internally uses the SUBPROG notation, and I think the "LOCAL"
part is confusing, so I'd drop it. How about just RELO_SUBPROG? We can
separately refactor these names to distinguish RELO_CALL from the new
one. It would be clearer if RELO_CALL were called RELO_SUBPROG_CALL
and the new one either RELO_SUBPROG_ADDR or RELO_SUBPROG_REF (as in
subprog reference).
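
E.g., a rough sketch of how the enum could then look (either name
works the same way):

enum reloc_type {
        RELO_LD64,
        RELO_CALL,              /* could later become RELO_SUBPROG_CALL */
        RELO_DATA,
        RELO_EXTERN,
        RELO_SUBPROG_ADDR,      /* or RELO_SUBPROG_REF, for &static_func loads */
};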

>  };
>
>  struct reloc_desc {
> @@ -574,6 +575,12 @@ static bool insn_is_subprog_call(const struct bpf_insn *insn)
>                insn->off == 0;
>  }
>
> +static bool insn_is_pseudo_func(const struct bpf_insn *insn)
> +{
> +       return insn->code == (BPF_LD | BPF_IMM | BPF_DW) &&

there is an is_ldimm64() function for exactly this check (just move it
up here, it's a one-liner)
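
i.e., something like this, with is_ldimm64() moved up above this point
(sketch):

static bool is_ldimm64(struct bpf_insn *insn)
{
        return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
}

static bool insn_is_pseudo_func(struct bpf_insn *insn)
{
        return is_ldimm64(insn) && insn->src_reg == BPF_PSEUDO_FUNC;
}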

> +              insn->src_reg == BPF_PSEUDO_FUNC;
> +}
> +
>  static int
>  bpf_object__init_prog(struct bpf_object *obj, struct bpf_program *prog,
>                       const char *name, size_t sec_idx, const char *sec_name,
> @@ -3395,6 +3402,16 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
>                 return 0;
>         }
>
> +       if (insn->code == (BPF_LD | BPF_IMM | BPF_DW) &&

just move this check below the next if, which checks !is_ldimm64;
there is no need to do it this early.

> +           GELF_ST_BIND(sym->st_info) == STB_LOCAL &&
> +           GELF_ST_TYPE(sym->st_info) == STT_SECTION &&
> +           shdr_idx == obj->efile.text_shndx) {

see above how RELO_CALL is handled: the shdr_idx != 0 check is missing
here. We also validate that sym->st_value is a multiple of BPF_INSN_SZ.
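
Roughly (untested sketch, with the check also moved below the
is_ldimm64 one as suggested above; the warning text is made up):

        if (!is_ldimm64(insn)) {
                pr_warn("prog '%s': invalid relo against '%s' for insns[%d].code 0x%x\n",
                        prog->name, sym_name, insn_idx, insn->code);
                return -LIBBPF_ERRNO__RELOC;
        }

        if (GELF_ST_BIND(sym->st_info) == STB_LOCAL &&
            GELF_ST_TYPE(sym->st_info) == STT_SECTION &&
            shdr_idx == obj->efile.text_shndx) {
                /* same sanity checks as the RELO_CALL path */
                if (!shdr_idx || sym->st_value % BPF_INSN_SZ) {
                        pr_warn("prog '%s': bad subprog addr relo against '%s' at offset %zu\n",
                                prog->name, sym_name, (size_t)sym->st_value);
                        return -LIBBPF_ERRNO__RELOC;
                }
                reloc_desc->type = RELO_SUBPROG_ADDR;
                reloc_desc->insn_idx = insn_idx;
                reloc_desc->sym_off = sym->st_value;
                return 0;
        }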

> +               reloc_desc->type = RELO_LOCAL_FUNC;
> +               reloc_desc->insn_idx = insn_idx;
> +               reloc_desc->sym_off = sym->st_value;
> +               return 0;
> +       }
> +
>         if (insn->code != (BPF_LD | BPF_IMM | BPF_DW)) {

feel free to use is_ldimm64() here as well, thanks!

>                 pr_warn("prog '%s': invalid relo against '%s' for insns[%d].code 0x%x\n",
>                         prog->name, sym_name, insn_idx, insn->code);
> @@ -6172,6 +6189,9 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
>                         }
>                         relo->processed = true;
>                         break;
> +               case RELO_LOCAL_FUNC:
> +                       insn[0].src_reg = BPF_PSEUDO_FUNC;
> +                       /* fallthrough */

falling through into an empty break clause seems a bit weird... just
break here and leave the same comment as below?
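
i.e., just a fragment of what I mean (using the RELO_SUBPROG_ADDR name
suggested above):

                case RELO_SUBPROG_ADDR:
                        insn[0].src_reg = BPF_PSEUDO_FUNC;
                        /* will be handled as a follow up pass */
                        break;
                case RELO_CALL:
                        /* will be handled as a follow up pass */
                        break;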

>                 case RELO_CALL:
>                         /* will be handled as a follow up pass */
>                         break;
> @@ -6358,11 +6378,11 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
>
>         for (insn_idx = 0; insn_idx < prog->sec_insn_cnt; insn_idx++) {
>                 insn = &main_prog->insns[prog->sub_insn_off + insn_idx];
> -               if (!insn_is_subprog_call(insn))
> +               if (!insn_is_subprog_call(insn) && !insn_is_pseudo_func(insn))
>                         continue;
>
>                 relo = find_prog_insn_relo(prog, insn_idx);
> -               if (relo && relo->type != RELO_CALL) {
> +               if (relo && relo->type != RELO_CALL && relo->type != RELO_LOCAL_FUNC) {
>                         pr_warn("prog '%s': unexpected relo for insn #%zu, type %d\n",
>                                 prog->name, insn_idx, relo->type);
>                         return -LIBBPF_ERRNO__RELOC;
> @@ -6374,8 +6394,15 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
>                          * call always has imm = -1, but for static functions
>                          * relocation is against STT_SECTION and insn->imm
>                          * points to a start of a static function
> +                        *
> +                        * for local func relocation, the imm field encodes
> +                        * the byte offset in the corresponding section.
>                          */
> -                       sub_insn_idx = relo->sym_off / BPF_INSN_SZ + insn->imm + 1;
> +                       if (relo->type == RELO_CALL)
> +                               sub_insn_idx = relo->sym_off / BPF_INSN_SZ + insn->imm + 1;
> +                       else
> +                               sub_insn_idx = relo->sym_off / BPF_INSN_SZ +
> +                                              insn->imm / BPF_INSN_SZ + 1;

nit: keep it on a single line; it still fits within 100 characters and
is easier to visually compare with the RELO_CALL case.

>                 } else {
>                         /* if subprogram call is to a static function within
>                          * the same ELF section, there won't be any relocation

don't we have to adjust insn->imm for this case as well? Let's add
selftests to make sure this works.

> --
> 2.24.1
>


