Re: [PATCH bpf-next v2 02/28] bpf: introduce hid program type

On Mon, Mar 7, 2022 at 10:39 AM Benjamin Tissoires
<benjamin.tissoires@xxxxxxxxxx> wrote:
>
> On Sat, Mar 5, 2022 at 1:03 AM Song Liu <song@xxxxxxxxxx> wrote:
> >
> > On Fri, Mar 4, 2022 at 9:31 AM Benjamin Tissoires
> > <benjamin.tissoires@xxxxxxxxxx> wrote:
> > >
> > > HID is a protocol that could benefit from using BPF too.
> >
> > [...]
> >
> > > +#include <linux/list.h>
> > > +#include <linux/slab.h>
> > > +
> > > +struct bpf_prog;
> > > +struct bpf_prog_array;
> > > +struct hid_device;
> > > +
> > > +enum bpf_hid_attach_type {
> > > +       BPF_HID_ATTACH_INVALID = -1,
> > > +       BPF_HID_ATTACH_DEVICE_EVENT = 0,
> > > +       MAX_BPF_HID_ATTACH_TYPE
> >
> > Is it typical to have different BPF programs for different attach types?
> > Otherwise (if different types may run similar BPF programs), maybe
> > we can pass the type as an argument to a program shared among the
> > different types?
>
> I am not quite sure I am entirely following you, but I consider the
> various attach types to be quite different, so you cannot really reuse
> the same BPF program with two different attach types.
>
> In my view, we have 4 attach types:
> - BPF_HID_ATTACH_DEVICE_EVENT: called whenever we receive an IRQ from
> the given device (so this is a net-like event stream)
> - BPF_HID_ATTACH_RDESC_FIXUP: there can be only one program of this
> type, and it is called to change the device capabilities, so you
> cannot reuse the other programs for this one
> - BPF_HID_ATTACH_USER_EVENT: called explicitly by the userspace
> process owning the program. There we can use sleeping functions (we
> are not in IRQ context), so this is also fundamentally different from
> the three others.
> - BPF_HID_ATTACH_DRIVER_EVENT: whenever the driver gets called into,
> a BPF program is run. This can be suspend/resume, or even a specific
> request to the device (changing a feature on the device or getting
> its current state). Again, IMO fundamentally different from the others.
>
> So I'm open to any suggestions, but if we can keep the userspace API
> defined with different SEC() annotations in libbpf, that would be best.

Thanks for this information. Different attach_types sound right for the use
case.
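
For illustration, the per-attach-type SEC() mapping could look
something like below (the section names and stub signatures are just
my guess here, not taken from the patches):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* one program type, four attach types, one section prefix each */

SEC("hid/device_event")		/* BPF_HID_ATTACH_DEVICE_EVENT, IRQ path */
int filter_event(struct hid_bpf_ctx *ctx)
{
	return 0;
}

SEC("hid/rdesc_fixup")		/* BPF_HID_ATTACH_RDESC_FIXUP, single prog */
int fixup_rdesc(struct hid_bpf_ctx *ctx)
{
	return 0;
}

SEC("hid/user_event")		/* BPF_HID_ATTACH_USER_EVENT, sleepable */
int handle_user_event(struct hid_bpf_ctx *ctx)
{
	return 0;
}

SEC("hid/driver_event")		/* BPF_HID_ATTACH_DRIVER_EVENT */
int handle_driver_event(struct hid_bpf_ctx *ctx)
{
	return 0;
}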

>
> >
> > [...]
> >
> > > +struct hid_device;
> > > +
> > > +enum hid_bpf_event {
> > > +       HID_BPF_UNDEF = 0,
> > > +       HID_BPF_DEVICE_EVENT,           /* when attach type is BPF_HID_DEVICE_EVENT */
> > > +};
> > > +
> > > +struct hid_bpf_ctx {
> > > +       enum hid_bpf_event type;        /* read-only */
> > > +       __u16 allocated_size;           /* the allocated size of data below (RO) */
> >
> > There is a (2-byte?) hole here, before the pointer.
> >
> > > +       struct hid_device *hdev;        /* read-only */
> > > +
> > > +       __u16 size;                     /* used size in data (RW) */
> > > +       __u8 data[];                    /* data buffer (RW) */
> > > +};
> >
> > Do we really need hid_bpf_ctx in uapi? Maybe we can just use it
> > from vmlinux.h?
>
> I gave this context some thought today, and I think I am getting to
> the limit of what I understand.
>
> My first worry is that the way I wrote it there, with a
> variable-length data field, is not forward compatible. Unless BTF and
> CO-RE work some magic, this will bite me in the long run IMO.
>
> But then, you are talking about not using uapi, and I am starting to
> wonder: am I doing things correctly?
>
> To solve my first issue (and the weird API I had to introduce in
> bpf_hid_get/set_data), I came up with the following:
> instead of exporting the data directly in the context, I could create
> a helper bpf_hid_get_data_buf(ctx, const uint size) that returns a
> RET_PTR_TO_ALLOC_MEM_OR_NULL in the same way bpf_ringbuf_reserve()
> does.
>
> This way, I can directly access the fields within the bpf program
> without having to worry about the size.
>
> But now, I am wondering whether the uapi I defined here is correct
> with respect to the way CO-RE works.
>
> My goal is for HID-BPF programs to be CO-RE compatible, so they do
> not have to be recompiled for the underlying kernel.
>
> I cannot work out right now whether I need to add other BTF helpers,
> in the same way accesses to struct xdp_md and struct xdp_buff are
> converted between one another, or whether defining a
> forward-compatible struct hid_bpf_ctx is enough.
> As far as I understand, .convert_ctx_access allows exporting a stable
> uapi to the bpf prog users, with the verifier doing the conversion
> between the structs for me. But is this really required for all BPF
> programs if we want them to be CO-RE?
>
> Also, I am starting to wonder whether I should hide fields in the
> context from users. The .data field could be a pointer, only
> accessed through the helper I mentioned above. This would be forward
> compatible, and would also allow forwarding whatever memory is
> available in the kernel to the BPF program. This way I could skip
> the memcpy and work directly with the incoming DMA data buffer from
> the IRQ.
>
> But is it best practice to do such a thing?

I think .convert_ctx_access is the way to go if we want to access the
data buffer without a memcpy. I am not sure how much work is needed to
make it compatible with CO-RE though.
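
Something along the lines of the xdp_md -> xdp_buff translation, i.e.
(a sketch only; hid_bpf_ctx_kern is a made-up name, and only the read
direction of one field is shown):

/* rewrite accesses to the stable uapi struct hid_bpf_ctx into
 * accesses to the internal kernel struct at verification time,
 * the way xdp_convert_ctx_access() does */
static u32 hid_convert_ctx_access(enum bpf_access_type type,
				  const struct bpf_insn *si,
				  struct bpf_insn *insn_buf,
				  struct bpf_prog *prog, u32 *target_size)
{
	struct bpf_insn *insn = insn_buf;

	switch (si->off) {
	case offsetof(struct hid_bpf_ctx, size):
		/* 16-bit load of the kernel-side size field */
		*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
				      offsetof(struct hid_bpf_ctx_kern, size));
		break;
	}

	return insn - insn_buf;
}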

To make sure I understand the case, do we want something like

int bpf_prog(struct hid_bpf_ctx *ctx)
{
    /* make sure n < ctx->size first */
    x = ctx->data[n];             /* read data */
    ctx->data[n] = <something>;   /* write data */
    ctx->size = <something <= n>; /* change data size */
}
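
Or, with the bpf_hid_get_data_buf() helper you described (again only a
sketch; the helper does not exist yet):

SEC("hid/device_event")
int bpf_prog(struct hid_bpf_ctx *ctx)
{
	__u8 *buf;

	/* request a 64-byte view of the data; returns the buffer or
	 * NULL, like bpf_ringbuf_reserve() */
	buf = bpf_hid_get_data_buf(ctx, 64);
	if (!buf)
		return 0;

	buf[0] = 0x02;	/* direct read/write, no memcpy */
	return 0;
}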

We also need it to be CO-RE compatible, so that we may modify
hid_bpf_ctx by inserting more members into it before data.
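
For reference (generic CO-RE mechanics, nothing specific to this
series), a field read that goes through BPF_CORE_READ() gets its
offset relocated against the running kernel's BTF at load time, which
is what would let members move without a recompile:

#include "vmlinux.h"
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_helpers.h>

/* generic CO-RE example: the offset of 'bus' is recorded as a
 * relocation and patched by libbpf for the running kernel */
static __always_inline __u32 hdev_bus(struct hid_device *hdev)
{
	return BPF_CORE_READ(hdev, bus);
}

How that composes with the ctx access rewrite above is the part I am
not sure about.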

Is this accurate?

Song


