On 11/1/22 1:12 PM, Stanislav Fomichev wrote:
As for the kfunc-based interface, I think it shows some promise.
Exposing a list of function names to retrieve individual metadata items
instead of a struct layout is roughly comparable in terms of developer
UI, accessibility, etc. (IMO).
Looks like there are quite a few use cases for hw_timestamp.
Do you think we could add it to the uapi like struct xdp_md?
The following is the current xdp_md:
struct xdp_md {
__u32 data;
__u32 data_end;
__u32 data_meta;
/* Below access go through struct xdp_rxq_info */
__u32 ingress_ifindex; /* rxq->dev->ifindex */
__u32 rx_queue_index; /* rxq->queue_index */
__u32 egress_ifindex; /* txq->dev->ifindex */
};
We could add a __u64 hw_timestamp to the xdp_md so users
can just read xdp_md->hw_timestamp to get the value.
xdp_md->hw_timestamp == 0 means the hw_timestamp is not
available.
Inside the kernel, the ctx rewriter can generate code
to call a driver-specific function to retrieve the data.
If the driver generates the code to retrieve the data, how's that
different from the kfunc approach?
The only difference I see is that it would be a stronger UAPI than
the kfuncs?
Another thing may be worth considering: some hints for some HW/drivers may
be harder (or may not be worth it) to unroll/inline. For example, I see a
driver doing spin_lock_bh while getting the hwtstamp. For this case, keeping
the kfunc call and avoiding the unroll/inline may be the right thing to do.
Yeah, I'm trying to look at the drivers right now and doing
spinlocks/seqlocks might complicate the story...
But it seems like in the worst case, the unrolled bytecode can always
resort to calling a kernel function?
Unroll the common cases and call a kernel function for everything else? That
should be doable. The bpf prog calling it as a kfunc will keep this kind of
flexibility.
(we might need to have some scratch area to preserve r1-r5 and we
can't touch r6-r9 because we are not in a real call, but seems doable;
I'll try to illustrate with a bunch of examples)
The kfunc approach can be used for *less* common use cases?
What's the advantage of having two approaches when one can cover
common and uncommon cases?
There are three main drawbacks, AFAICT:
1. It requires driver developers to write and maintain the code that
generates the unrolled BPF bytecode to access the metadata fields, which
is a non-trivial amount of complexity. Maybe this can be abstracted away
with some internal helpers, though (like, e.g., a
bpf_xdp_metadata_copy_u64(dst, src, offset) helper which would spit out
the required JMP/MOV/LDX instructions?).
2. AF_XDP programs won't be able to access the metadata without using a
custom XDP program that calls the kfuncs and puts the data into the
metadata area. We could solve this with some code in libxdp, though; if
that code just dumps the available metadata functions from the running
kernel at load time, it may be generic enough to be forward-compatible
with new kernel versions that add new fields, which should alleviate
Florian's concern about keeping things in sync.
3. It will make it harder to consume the metadata when building SKBs. I
think the CPUMAP and veth use cases are also quite important, and that
we want metadata to be available for building SKBs in this path. Maybe
this can be resolved by having a convenient kfunc for this that can be
used for programs doing such redirects. E.g., you could just call
xdp_copy_metadata_for_skb() before doing the bpf_redirect, and that
would recursively expand into all the kfunc calls needed to extract the
metadata supported by the SKB path?
-Toke