Re: [PATCH bpf-next 2/5] libbpf: Prepare light skeleton for the kernel.

On Mon, Feb 07, 2022 at 12:04:14PM -0800, Andrii Nakryiko wrote:
> >   */
> >  struct bpf_map_desc {
> > -       union {
> > -               /* input for the loader prog */
> > -               struct {
> > -                       __aligned_u64 initial_value;
> > -                       __u32 max_entries;
> > -               };
> > +       struct {
> 
> Is this anonymous struct still needed?

Right. Will remove.
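The fields will then sit directly in bpf_map_desc, roughly like this
(sketch of the resulting layout; other members elided):

struct bpf_map_desc {
	...
	/* input for the loader prog */
	__aligned_u64 initial_value;
	__u32 max_entries;
};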

> > +static inline void *skel_alloc(size_t size)
> > +{
> > +       return kcalloc(1, size, GFP_KERNEL);
> > +}
> > +static inline void skel_free(const void *p)
> > +{
> > +       kfree(p);
> > +}
> 
> any reason to skimp on empty lines between functions? The rest of this
> file (and libbpf code in general) feels very different in terms of
> spacing.

Because it's more compact, but I don't mind extra lines.

> > +static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
> > +{
> > +       if (addr && addr != ~0ULL)
> > +               vm_munmap(addr, sz);
> > +       if (addr != ~0ULL)
> > +               kvfree(p);
> 
> minor nit: a small comment explaining that we set addr to ~0ULL on
> error (but we still call skel_free_map_data) would help a bit here

ok.
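Something along these lines (sketch, exact wording TBD):

static inline void skel_free_map_data(void *p, __u64 addr, size_t sz)
{
	/* The error path sets addr to ~0ULL after it has already
	 * cleaned up, but skel_free_map_data() is still called from
	 * the common destroy path, so skip the unmap/free then.
	 */
	if (addr && addr != ~0ULL)
		vm_munmap(addr, sz);
	if (addr != ~0ULL)
		kvfree(p);
}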

> > +}
> > +/* skel->bss/rodata maps are populated in three steps.
> > + *
> > + * For kernel use:
> > + * skel_prep_map_data() allocates kernel memory that kernel module can directly access.
> > + * skel_prep_init_value() allocates a region in user space process and copies
> > + * potentially modified initial map value into it.
> > + * The loader program will perform copy_from_user() from maps.rodata.initial_value.
> 
> I'm missing something here. If a light skeleton is used from a kernel
> module, then this initialization data is also pointing to kernel
> module memory, no? So why the copy_from_user() then?
> 
> Also, this vm_mmap() and vm_munmap(): are they necessary for the
> in-kernel skeleton itself, or are they required so that if some
> user-space process fetched that BPF map by ID and tried to mmap() its
> contents, that would be possible? Otherwise it's not clear, as a kernel
> module can access the BPF array's value pointer directly anyway, so why
> the mmapping?

The vm_mmap step is there only to preserve a single version of the light
skeleton that works for both user space and the kernel. Otherwise bpftool
would need another flag and test coverage would have to grow. This way the
light skeleton for the kernel doesn't need a bunch of new tests.
Another option would be to add an 'is_kernel' flag to bpf_loader_ctx and
generate the loader program like:

if (ctx->is_kernel)
  bpf_probe_read_kernel
else
  bpf_copy_from_user

but 'ctx' is modified after the signature check, so a user space user of
the light skeleton could trick it into populating maps with garbage kernel
data. The loader prog needs cap_perfmon anyway, so there are no security
concerns, but it's not great to leave such room for error.
The vm_mmap approach is not pretty, but it looks to be the lesser evil.
I'm all ears if there are other options.
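For reference, the kernel-mode prep step that the vm_mmap approach implies
boils down to something like this (simplified sketch, not the exact patch
code; the real skel_prep_init_value() signature differs):

static inline int skel_prep_init_value(__u64 *addr, const void *val, size_t sz)
{
	unsigned long uaddr;

	/* map anonymous memory into the current (module loading) process
	 * so that the loader prog's copy_from_user() works the same way
	 * it does for a user space light skeleton
	 */
	uaddr = vm_mmap(NULL, 0, sz, PROT_READ | PROT_WRITE,
			MAP_ANONYMOUS | MAP_PRIVATE, 0);
	if (IS_ERR_VALUE(uaddr))
		return -ENOMEM;
	if (copy_to_user((void __user *)uaddr, val, sz)) {
		vm_munmap(uaddr, sz);
		return -EFAULT;
	}
	*addr = uaddr;
	return 0;
}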


