Re: [PATCH bpf-next 2/5] libbpf: Prepare light skeleton for the kernel.

On Fri, Feb 4, 2022 at 3:17 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
>
> Prepare the light skeleton to be used in the kernel module and in user space.
> The look and feel of lskel.h is mostly the same, with the difference that for
> user space skel->rodata is the same pointer before and after the skel_load
> operation, while in the kernel skel->rodata after skel_open and skel->rodata
> after skel_load are different pointers.
> Typical usage of skeleton remains the same for kernel and user space:
> skel = my_bpf__open();
> skel->rodata->my_global_var = init_val;
> err = my_bpf__load(skel);
> err = my_bpf__attach(skel);
> // access skel->rodata->my_global_var;
> // access skel->bss->another_var;
>
> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> ---
>  tools/lib/bpf/skel_internal.h | 168 +++++++++++++++++++++++++++++++---
>  1 file changed, 153 insertions(+), 15 deletions(-)
>
> diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
> index dcd3336512d4..4d99ef8cbbba 100644
> --- a/tools/lib/bpf/skel_internal.h
> +++ b/tools/lib/bpf/skel_internal.h
> @@ -3,9 +3,19 @@
>  #ifndef __SKEL_INTERNAL_H
>  #define __SKEL_INTERNAL_H
>
> +#ifdef __KERNEL__
> +#include <linux/fdtable.h>
> +#include <linux/mm.h>
> +#include <linux/mman.h>
> +#include <linux/slab.h>
> +#include <linux/bpf.h>
> +#else
>  #include <unistd.h>
>  #include <sys/syscall.h>
>  #include <sys/mman.h>
> +#include <stdlib.h>
> +#include "bpf.h"
> +#endif
>
>  #ifndef __NR_bpf
>  # if defined(__mips__) && defined(_ABIO32)
> @@ -25,16 +35,12 @@
>   * requested during loader program generation.
>   */
>  struct bpf_map_desc {
> -       union {
> -               /* input for the loader prog */
> -               struct {
> -                       __aligned_u64 initial_value;
> -                       __u32 max_entries;
> -               };
> +       struct {

Is this anonymous struct still needed?
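If not, the wrapper could simply be dropped and the fields flattened, e.g.
(just sketching what I mean, same fields and order as in this patch):

struct bpf_map_desc {
	/* output of the loader prog */
	int map_fd;
	/* input for the loader prog */
	__u32 max_entries;
	__aligned_u64 initial_value;
};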

>                 /* output of the loader prog */
> -               struct {
> -                       int map_fd;
> -               };
> +               int map_fd;
> +               /* input for the loader prog */
> +               __u32 max_entries;
> +               __aligned_u64 initial_value;
>         };
>  };
>  struct bpf_prog_desc {
> @@ -57,11 +63,135 @@ struct bpf_load_and_run_opts {
>         const char *errstr;
>  };
>
> +long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
> +
>  static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
>                           unsigned int size)
>  {
> +#ifdef __KERNEL__
> +       return bpf_sys_bpf(cmd, attr, size);
> +#else
>         return syscall(__NR_bpf, cmd, attr, size);
> +#endif
> +}
> +
> +#ifdef __KERNEL__
> +static inline int close(int fd)
> +{
> +       return close_fd(fd);
> +}
> +static inline void *skel_alloc(size_t size)
> +{
> +       return kcalloc(1, size, GFP_KERNEL);
> +}
> +static inline void skel_free(const void *p)
> +{
> +       kfree(p);
> +}

any reason to skimp on empty lines between functions? The rest of this
file (and libbpf code in general) feels very different in terms of
spacing.

> +static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
> +{
> +       if (addr && addr != ~0ULL)
> +               vm_munmap(addr, sz);
> +       if (addr != ~0ULL)
> +               kvfree(p);

minor nit: a small comment explaining that we set addr to ~0ULL on error
(but that we still call skel_free_map_data() in that case) would help a
bit here.

> +}
> +/* skel->bss/rodata maps are populated in three steps.
> + *
> + * For kernel use:
> + * skel_prep_map_data() allocates kernel memory that kernel module can directly access.
> + * skel_prep_init_value() allocates a region in user space process and copies
> + * potentially modified initial map value into it.
> + * The loader program will perform copy_from_user() from maps.rodata.initial_value.

I'm missing something here. If a light skeleton is used from a kernel
module, then this initialization data also points to kernel module
memory, no? So why the copy_from_user() then?

Also, this vm_mmap() and vm_munmap(): is it necessary for the in-kernel
skeleton itself, or is it required so that a user-space process that
fetches that BPF map by ID could mmap() its contents? Otherwise it's not
clear, as a kernel module can access the BPF array's value pointer
directly anyway, so why the mmapping?
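
Just to make sure I understand the second scenario, you mean roughly
something like this from some unrelated user-space process? (Sketch only;
peek_rodata/map_id/rodata_sz are made-up names, and it assumes the map is
created with BPF_F_MMAPABLE.)

#include <unistd.h>
#include <sys/mman.h>
#include <bpf/bpf.h>

static void *peek_rodata(__u32 map_id, size_t rodata_sz)
{
	int fd = bpf_map_get_fd_by_id(map_id);
	void *p;

	if (fd < 0)
		return NULL;
	/* caller must check the result against MAP_FAILED, not NULL */
	p = mmap(NULL, rodata_sz, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);
	return p;
}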


> + * skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map.
> + *
> + * For user space:
> + * skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.
> + * skel_prep_init_value() copies rodata pointer into map.rodata.initial_value.
> + * The loader program will perform copy_from_user() from maps.rodata.initial_value.
> + * skel_finalize_map_data() remaps bpf array map value from the kernel memory into
> + * skel->rodata address.
> + */
> +static inline void *skel_prep_map_data(const void *val, size_t mmap_sz, size_t val_sz)
> +{
> +       void *addr;
> +
> +       addr = kvmalloc(val_sz, GFP_KERNEL);
> +       if (!addr)
> +               return NULL;
> +       memcpy(addr, val, val_sz);
> +       return addr;
> +}

[...]


