Re: [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@xxxxxxxxx> wrote:
>
> This patch adds libbpf support for kernel module function calls. The
> fd_array parameter is used during BPF program load to pass module
> BTFs referenced by the program. insn->off is set to the index into
> this array, but starts from 1, because insn->off of 0 is reserved for
> btf_vmlinux.
>
> We reuse the existing insn->off field for the module index, since the
> kernel limits the maximum number of distinct module BTFs for kfuncs to
> 256, and because the index must never exceed the maximum value that
> fits in insn->off (INT16_MAX). In the future, if the kernel interprets
> the signed offset as unsigned for kfunc calls, this limit can be
> raised to UINT16_MAX.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@xxxxxxxxx>
> ---
>  tools/lib/bpf/bpf.c             |  1 +
>  tools/lib/bpf/libbpf.c          | 58 +++++++++++++++++++++++++++++++--
>  tools/lib/bpf/libbpf_internal.h |  1 +
>  3 files changed, 57 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index 2401fad090c5..7d1741ceaa32 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -264,6 +264,7 @@ int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr)
>         attr.line_info_rec_size = load_attr->line_info_rec_size;
>         attr.line_info_cnt = load_attr->line_info_cnt;
>         attr.line_info = ptr_to_u64(load_attr->line_info);
> +       attr.fd_array = ptr_to_u64(load_attr->fd_array);
>
>         if (load_attr->name)
>                 memcpy(attr.prog_name, load_attr->name,
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index da65a1666a5e..3049dfc6088e 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -420,6 +420,12 @@ struct extern_desc {
>
>                         /* local btf_id of the ksym extern's type. */
>                         __u32 type_id;
> +                       /* offset to be patched in for insn->off,
> +                        * this is 0 for btf_vmlinux, and index + 1

What does "index + 1" mean here? It seems like the kernel code uses the
offset as is, without any -1 compensation.

> +                        * for module BTF, where index is BTF index in
> +                        * obj->fd_array
> +                        */
> +                       __s16 offset;
>                 } ksym;
>         };
>  };
> @@ -516,6 +522,10 @@ struct bpf_object {
>         void *priv;
>         bpf_object_clear_priv_t clear_priv;
>
> +       int *fd_array;
> +       size_t fd_cap_cnt;
> +       int nr_fds;
> +
>         char path[];
>  };
>  #define obj_elf_valid(o)       ((o)->efile.elf)
> @@ -5357,6 +5367,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
>                         ext = &obj->externs[relo->sym_off];
>                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
>                         insn[0].imm = ext->ksym.kernel_btf_id;
> +                       insn[0].off = ext->ksym.offset;
>                         break;
>                 case RELO_SUBPROG_ADDR:
>                         if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
> @@ -6151,6 +6162,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
>         }
>         load_attr.log_level = prog->log_level;
>         load_attr.prog_flags = prog->prog_flags;
> +       load_attr.fd_array = prog->obj->fd_array;
>
>         if (prog->obj->gen_loader) {
>                 bpf_gen__prog_load(prog->obj->gen_loader, &load_attr,
> @@ -6763,9 +6775,46 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
>         }
>
>         if (kern_btf != obj->btf_vmlinux) {
> -               pr_warn("extern (func ksym) '%s': function in kernel module is not supported\n",
> -                       ext->name);
> -               return -ENOTSUP;
> +               int index = -1;
> +
> +               if (!obj->fd_array) {
> +                       obj->fd_array = calloc(8, sizeof(*obj->fd_array));
> +                       if (!obj->fd_array)
> +                               return -ENOMEM;
> +                       obj->fd_cap_cnt = 8;
> +                       /* index = 0 is for vmlinux BTF, so skip it */
> +                       obj->nr_fds = 1;
> +               }

this doesn't make sense; you use libbpf_ensure_mem() below, so you
shouldn't do anything like this, it's all taken care of already
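For reference, libbpf_ensure_mem() handles both the initial allocation and
subsequent growth, roughly along these lines (a simplified sketch of its
semantics for illustration, not the actual libbpf implementation):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of libbpf_ensure_mem() semantics: grow *data to hold
 * at least need_cnt elements of elem_sz bytes, updating *cap_cnt. Newly
 * added capacity is zero-initialized, so starting from a NULL pointer and
 * zero capacity "just works" without a separate calloc() bootstrap.
 */
static int ensure_mem_sketch(void **data, size_t *cap_cnt, size_t elem_sz,
			     size_t need_cnt)
{
	size_t new_cnt;
	void *new_data;

	if (need_cnt <= *cap_cnt)
		return 0;

	new_cnt = *cap_cnt * 3 / 2;	/* grow geometrically ... */
	if (new_cnt < need_cnt)
		new_cnt = need_cnt;	/* ... but at least to need_cnt */

	new_data = realloc(*data, new_cnt * elem_sz);
	if (!new_data)
		return -ENOMEM;

	/* zero-fill the newly added tail */
	memset((char *)new_data + *cap_cnt * elem_sz, 0,
	       (new_cnt - *cap_cnt) * elem_sz);
	*data = new_data;
	*cap_cnt = new_cnt;
	return 0;
}
```

So the caller only ever asks for the count it needs and never special-cases
the empty array.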

> +
> +               for (int i = 0; i < obj->nr_fds; i++) {
> +                       if (obj->fd_array[i] == kern_btf_fd) {
> +                               index = i;
> +                               break;
> +                       }
> +               }

we can actually avoid all this. We already have a list of module BTFs
in bpf_object (obj->btf_modules), where we remember their id, fd, etc.
We can also remember their fd_arr_idx for quick lookup. Just teach
find_ksym_btf_id() to optionally return struct module_btf * and use
that to find/set idx. That seems cleaner and probably would be easier
in the future as well.
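Roughly this shape, as an illustrative sketch of the suggested flow (struct
and field names here are made up for the example; the real change would add
an fd_array_idx field to libbpf's struct module_btf and an out-parameter to
find_ksym_btf_id()):

```c
#include <errno.h>
#include <stdlib.h>

/* Illustrative stand-ins for the real libbpf internals */
struct module_btf_sk {
	int fd;            /* module BTF FD */
	int fd_array_idx;  /* cached index in obj->fd_array, -1 if unset */
};

struct obj_sk {
	int *fd_array;
	int nr_fds;        /* slot 0 reserved for vmlinux BTF */
	size_t cap;
};

/* Return the (possibly cached) fd_array index for a module BTF,
 * appending its FD on first use. Slot 0 stays reserved for btf_vmlinux,
 * so no linear scan over fd_array is ever needed.
 */
static int mod_btf_fd_idx(struct obj_sk *obj, struct module_btf_sk *mod)
{
	if (mod->fd_array_idx > 0)
		return mod->fd_array_idx; /* quick lookup via cached idx */

	/* index 0 is reserved for btf_vmlinux */
	if (obj->nr_fds == 0)
		obj->nr_fds = 1;

	if ((size_t)obj->nr_fds + 1 > obj->cap) {
		size_t new_cap = obj->cap ? obj->cap * 2 : 8;
		int *tmp = realloc(obj->fd_array, new_cap * sizeof(int));

		if (!tmp)
			return -ENOMEM;
		obj->fd_array = tmp;
		obj->cap = new_cap;
	}
	mod->fd_array_idx = obj->nr_fds;
	obj->fd_array[obj->nr_fds++] = mod->fd;
	return mod->fd_array_idx;
}
```

(In real libbpf code the growth would of course go through
libbpf_ensure_mem() rather than a hand-rolled realloc.)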

> +
> +               if (index == -1) {
> +                       if (obj->nr_fds == obj->fd_cap_cnt) {

don't check, libbpf_ensure_mem() handles that

> +                               ret = libbpf_ensure_mem((void **)&obj->fd_array,
> +                                                       &obj->fd_cap_cnt, sizeof(int),
> +                                                       obj->fd_cap_cnt + 1);
> +                               if (ret)
> +                                       return ret;
> +                       }
> +
> +                       index = obj->nr_fds;
> +                       obj->fd_array[obj->nr_fds++] = kern_btf_fd;
> +               }
> +
> +               if (index > INT16_MAX) {
> +                       /* insn->off is s16 */
> +                       pr_warn("extern (func ksym) '%s': module btf fd index too big\n",
> +                               ext->name);

can you log index value here as well? "module BTF FD index %d is too big\n"?

> +                       return -E2BIG;
> +               }
> +               ext->ksym.offset = index;

> +       } else {
> +               ext->ksym.offset = 0;
>         }

I think it will be cleaner if you move the entire offset determination
logic after all the other checks are performed and ext is mostly
populated. That will also make the logic shorter and simpler, because
if you find a kern_btf_fd match you can exit early (or probably rather
goto to report the match and exit).
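i.e., roughly this shape (an illustrative sketch with made-up names; the
dedup scan would disappear entirely with the fd_array_idx caching in
module_btf mentioned earlier, and capacity management is assumed to happen
elsewhere via libbpf_ensure_mem()):

```c
#include <errno.h>
#include <stdint.h>

/* Sketch: find kern_btf_fd in fd_array or append it, returning the value
 * to patch into insn->off. Early-exits on a match; 0 means btf_vmlinux.
 * Assumes fd_array already has room for one more entry.
 */
static int ksym_btf_fd_off(int *fd_array, int *nr_fds, int kern_btf_fd,
			   int vmlinux_btf_fd)
{
	int i;

	if (kern_btf_fd == vmlinux_btf_fd)
		return 0; /* btf_vmlinux: insn->off stays 0 */

	for (i = 1; i < *nr_fds; i++) {
		if (fd_array[i] == kern_btf_fd)
			return i; /* early exit on match */
	}

	if (*nr_fds > INT16_MAX)
		return -E2BIG; /* new index must fit in s16 insn->off */

	fd_array[(*nr_fds)++] = kern_btf_fd;
	return *nr_fds - 1;
}
```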

>
>         kern_func = btf__type_by_id(kern_btf, kfunc_id);

this is actually extremely wasteful for module BTFs. Let's add an
internal (at least for now) helper that will search only for "own" BTF
types in the BTF, skipping types in the base BTF. Something like
btf_type_by_id_own()?
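Something along these lines (a self-contained toy model of the idea, not
libbpf's actual internals; in real code the first "own" ID would come from
the base vmlinux BTF's type count):

```c
#include <stddef.h>

/* Minimal model of split BTF: types[] holds only this BTF's "own" types;
 * IDs below start_id belong to the base (vmlinux) BTF.
 */
struct btf_sk {
	const char **types; /* own type names, indexed from 0 */
	int nr_own;
	int start_id;       /* first own type ID = base nr_types + 1 */
};

/* Sketch of a btf_type_by_id_own(): resolve id only if it is an own
 * type, never walking into the (huge) base vmlinux type section.
 */
static const char *type_by_id_own(const struct btf_sk *btf, int id)
{
	if (id < btf->start_id || id >= btf->start_id + btf->nr_own)
		return NULL; /* base-BTF ID or out of range: not ours */
	return btf->types[id - btf->start_id];
}
```

The point is that the lookup is a constant-time index into the module's own
type array instead of a search that includes all vmlinux types.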

> @@ -6941,6 +6990,9 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
>                         err = bpf_gen__finish(obj->gen_loader);
>         }
>
> +       /* clean up fd_array */
> +       zfree(&obj->fd_array);
> +
>         /* clean up module BTFs */
>         for (i = 0; i < obj->btf_module_cnt; i++) {
>                 close(obj->btf_modules[i].fd);
> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index ceb0c98979bc..44b8f381b035 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -291,6 +291,7 @@ struct bpf_prog_load_params {
>         __u32 log_level;
>         char *log_buf;
>         size_t log_buf_sz;
> +       int *fd_array;
>  };
>
>  int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr);
> --
> 2.33.0
>
