Re: [RFC bpf-next 0/7] bpf: netdev TX metadata

Stanislav Fomichev <sdf@xxxxxxxxxx> writes:

> On Tue, Jun 13, 2023 at 10:18 AM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>>
>> Stanislav Fomichev <sdf@xxxxxxxxxx> writes:
>>
>> > On 06/12, Toke Høiland-Jørgensen wrote:
>> >> Some immediate thoughts after glancing through this:
>> >>
>> >> > --- Use cases ---
>> >> >
>> >> > The goal of this series is to add two new standard-ish places
>> >> > in the transmit path:
>> >> >
>> >> > 1. Right before the packet is transmitted (with access to TX
>> >> >    descriptors)
>> >> > 2. Right after the packet is actually transmitted and we've received the
>> >> >    completion (again, with access to TX completion descriptors)
>> >> >
>> >> > Accessing TX descriptors unlocks the following use-cases:
>> >> >
>> >> > - Setting device hints at TX: XDP/AF_XDP might use these new hooks to
>> >> > request device offloads; the existing use case implements TX timestamps.
>> >> > - Observability: global per-netdev hooks can be used for tracing
>> >> > the packets and exploring completion descriptors for all sorts of
>> >> > device errors.
>> >> >
>> >> > Accessing TX descriptors also means that the hooks have to be called
>> >> > from the drivers.
>> >> >
>> >> > The hooks are a light-weight alternative to XDP at egress and currently
>> >> > don't provide any packet modification abilities. However, we can
>> >> > eventually expose new kfuncs to operate on the packet (or, rather, on
>> >> > the actual descriptors, for performance's sake).
>> >>
>> >> dynptr?
>> >
>> > Haven't considered, let me explore, but not sure what it buys us
>> > here?
>>
>> API consistency, certainly. Possibly also performance, if using the
>> slice thing that gets you a direct pointer to the pkt data? Not sure
>> about that, though, haven't done extensive benchmarking of dynptr yet...
>
> Same. Let's keep it on the table, I'll try to explore. I was just
> thinking that having less abstraction here might be better
> performance-wise.

Sure, let's evaluate this once we have performance numbers.
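
For reference, the "slice thing" I have in mind is bpf_dynptr_slice(). A very
rough sketch of how a completion hook could use it is below; the
devtx_complete attach point, the bpf_devtx_completion_dynptr() kfunc and the
descriptor layout are all made up for illustration, only bpf_dynptr_slice()
exists today:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Existing kfunc: returns a read-only (possibly buffer-backed) slice. */
extern void *bpf_dynptr_slice(const struct bpf_dynptr *p, __u32 offset,
			      void *buffer, __u32 buffer__szk) __ksym;

/* Hypothetical kfunc: wraps the TX completion descriptor in a dynptr. */
extern int bpf_devtx_completion_dynptr(struct devtx_frame *frame,
				       struct bpf_dynptr *dptr) __ksym;

SEC("fentry/devtx_complete")	/* placeholder hook name, not the RFC's */
int BPF_PROG(read_tx_completion, struct devtx_frame *frame)
{
	struct bpf_dynptr dptr;
	__u64 buf, *tstamp;

	if (bpf_devtx_completion_dynptr(frame, &dptr))
		return 0;

	/* Pull 8 bytes out of the descriptor; offset 0 stands in for
	 * wherever the device puts its completion timestamp. */
	tstamp = bpf_dynptr_slice(&dptr, 0, &buf, sizeof(buf));
	if (tstamp)
		bpf_printk("tx completion ts: %llu", *tstamp);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";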

>> >> > --- UAPI ---
>> >> >
>> >> > The hooks are implemented in HID-BPF style, meaning they don't
>> >> > expose any UAPI and are implemented as tracing programs that call
>> >> > a bunch of kfuncs. The attach/detach operations happen via BPF syscall
>> >> > programs. The series extends the device-bound infrastructure to tracing
>> >> > programs.
>> >>
>> >> Not a fan of the "attach from BPF syscall program" thing. These are part
>> >> of the XDP data path API, and I think we should expose them as proper
>> >> bpf_link attachments from userspace with introspection etc. But I guess
>> >> the bpf_mprog thing will give us that?
>> >
>> > bpf_mprog will just make those attach kfuncs return the link fd. The
>> > syscall program will still stay :-(
>>
>> Why does the attachment have to be done this way, exactly? Couldn't we
>> just use the regular bpf_link attachment from userspace? AFAICT it's not
>> really piggy-backing on the function override thing anyway when the
>> attachment is per-dev? Or am I misunderstanding how all this works?
>
> It's UAPI vs non-UAPI. I'm assuming kfunc makes it non-UAPI and gives
> us an opportunity to fix things.
> We can do it via a regular syscall path if there is a consensus.

Yeah, the API exposed to the BPF program is kfunc-based in any case. If
we were to at some point conclude that this whole thing was not useful
at all and deprecate it, whether that means "you can no longer create a
link attachment of this type via BPF_LINK_CREATE" or "you can no longer
create a link attachment of this type via BPF_PROG_RUN of a syscall type
program" doesn't really seem like a significant detail to me...

>> >> > --- skb vs xdp ---
>> >> >
>> >> > The hooks operate on a new light-weight devtx_frame which contains:
>> >> > - data
>> >> > - len
>> >> > - sinfo
>> >> >
>> >> > This should allow us to have a unified (from a BPF PoV) place at TX
>> >> > and not be super-taxing (we need to copy 2 pointers + len to the stack
>> >> > for each invocation).
>> >>
>> >> Not sure what I think about this one. At the very least I think we
>> >> should expose xdp->data_meta as well. I'm not sure what the use case for
>> >> accessing skbs is? If that *is* indeed useful, probably there will also
>> >> end up being a use case for accessing the full skb?
>> >
>> > skb_shared_info has meta_len, but afaik xdp doesn't use it. Maybe it's
>> > a good opportunity to unify? Or it probably won't work, because if an
>> > xdp_frame doesn't have frags, it won't have sinfo?
>>
>> No, it won't. But why do we need this unification between the skb and
>> xdp paths in the first place? Doesn't the skb path already have support
>> for these things? Seems like we could just stick to making this xdp-only
>> and keeping xdp_frame as the ctx argument?
>
> For skb path, I'm assuming we can read sinfo->meta_len; it feels nice
> to make it work for both cases?
> We can always export metadata len via some kfunc, sure.

I wasn't referring to the metadata field specifically when talking about
the skb path. I'm wondering why we need these hooks to work for the skb
path at all? :)
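
For context, my mental model of the ctx struct from the cover letter is
roughly the below; the field types are my guess, only the data/len/sinfo
contents come from the series description:

/* Rough sketch, not the RFC's actual definition. */
struct devtx_frame {
	void *data;			/* start of the frame being transmitted */
	u32 len;			/* frame length */
	struct skb_shared_info *sinfo;	/* frags/meta; filled from an skb or an xdp_frame */
};

Since the skb path already has its own ways of getting at this information,
my question above is whether we need to populate it from skbs at all.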

-Toke




