On 10/02, Toke Høiland-Jørgensen wrote:
> Stanislav Fomichev <stfomichev@xxxxxxxxx> writes:
>
> > On 10/01, Toke Høiland-Jørgensen wrote:
> >> Lorenzo Bianconi <lorenzo@xxxxxxxxxx> writes:
> >>
> >> >> On Mon Sep 30, 2024 at 1:49 PM CEST, Lorenzo Bianconi wrote:
> >> >> > > Lorenzo Bianconi <lorenzo@xxxxxxxxxx> writes:
> >> >> > >
> >> >> > > >> > We could combine such a registration API with your header format, so
> >> >> > > >> > that the registration just becomes a way of allocating one of the keys
> >> >> > > >> > from 0-63 (and the registry just becomes a global copy of the header).
> >> >> > > >> > This would basically amount to moving the "service config file" into the
> >> >> > > >> > kernel, since that seems to be the only common denominator we can rely
> >> >> > > >> > on between BPF applications (as all attempts to write a common daemon
> >> >> > > >> > for BPF management have shown).
> >> >> > > >>
> >> >> > > >> That sounds reasonable. And I guess we'd have set() check the global
> >> >> > > >> registry to enforce that the key has been registered beforehand?
> >> >> > > >>
> >> >> > > >> >
> >> >> > > >> > -Toke
> >> >> > > >>
> >> >> > > >> Thanks for all the feedback!
> >> >> > > >
> >> >> > > > I like this 'fast' KV approach, but I guess we should really evaluate its
> >> >> > > > impact on performance (especially for XDP) since, based on the kfunc call
> >> >> > > > order in the eBPF program, we can have one or multiple memmove/memcpy
> >> >> > > > operations for each packet, right?
> >> >> > >
> >> >> > > Yes, with Arthur's scheme, performance will be ordering-dependent. Using
> >> >> > > a global registry for offsets would sidestep this, but would have the
> >> >> > > synchronisation issues we discussed up-thread. So on balance, I think
> >> >> > > the memmove() suggestion will probably lead to the least pain.
> >> >> > >
> >> >> > > For the HW metadata we could sidestep this by always having a fixed
> >> >> > > struct for it (but using the same set/get() API with reserved keys). The
> >> >> > > only drawback of doing that is that we statically reserve a bit of
> >> >> > > space, but I'm not sure that is such a big issue in practice (at least
> >> >> > > not until this becomes so popular that the space starts to be contended;
> >> >> > > but surely 256 bytes ought to be enough for everybody, right? :)).
> >> >> >
> >> >> > I am fine with the proposed approach, but I think we need to verify what
> >> >> > the impact on performance is (in the worst case?).
> >> >>
> >> >> If drivers are responsible for populating the hardware metadata before
> >> >> XDP, we could make sure drivers set the fields in order to avoid any
> >> >> memmove() (and maybe even provide a helper to ensure this?).
> >> >
> >> > Nope, since the current APIs introduced by Stanislav consume NIC
> >> > metadata in kfuncs (mainly for AF_XDP) and, as I understand it, we want
> >> > to add a kfunc to store the info for each piece of NIC metadata
> >> > (e.g. rx-hash, timestamping, ...) into the packet (this is what Toke is
> >> > proposing, right?). In this case the kfunc calling order makes a
> >> > difference. We could even add a single kfunc to store the info for all
> >> > the NIC metadata (maybe via a helper struct), but that seems not
> >> > scalable to me and we would lose kfunc versatility.
> >>
> >> Yes, I agree we should have separate kfuncs for each metadata field.
> >> Which means it makes a lot of sense to just use the same setter API that
> >> we use for the user-registered metadata fields, but using reserved keys.
> >> So something like:
> >>
> >> #define BPF_METADATA_HW_HASH      BIT(60)
> >> #define BPF_METADATA_HW_TIMESTAMP BIT(61)
> >> #define BPF_METADATA_HW_VLAN      BIT(62)
> >> #define BPF_METADATA_RESERVED     (0xffff << 48)
> >>
> >> bpf_packet_metadata_set(pkt, BPF_METADATA_HW_HASH, hash_value);
> >>
> >>
> >> As for the internal representation, we can just have the kfunc do
> >> something like:
> >>
> >> int bpf_packet_metadata_set(field_id, value) {
> >>         switch (field_id) {
> >>         case BPF_METADATA_HW_HASH:
> >>                 pkt->xdp_hw_meta.hash = value;
> >>                 break;
> >>         [...]
> >>         default:
> >>                 /* do the key packing thing */
> >>         }
> >> }
> >>
> >>
> >> That way the order of setting the HW fields doesn't matter, only that of
> >> the user-defined metadata.
> >
> > Can you expand on why we need the flexibility of picking the metadata
> > fields here? Presumably we are talking about the use cases where the XDP
> > program is doing redirect/pass and it doesn't really know who the final
> > consumer is (it might be another XDP program or the xdp->skb kernel
> > case), so the only sensible option here seems to be to store everything?
>
> For the same reason that we have separate kfuncs for each metadata field
> when getting it from the driver: XDP programs should have the
> flexibility to decide which pieces of metadata they need, and skip the
> overhead of stuff that is not needed.
>
> For instance, say an XDP program knows that nothing in the system uses
> timestamps; in that case, it can skip both the getter and the setter
> call for timestamps.

But doesn't that put us in the same place, where the first (native) XDP
program needs to know which metadata the final consumer wants? At that
point, why not propagate the metadata layout as well? (Or maybe I'm still
missing what exact use case we are trying to solve.)

> I suppose we *could* support just a single call to set the skb meta,
> like:
>
> bpf_set_skb_meta(struct xdp_md *pkt, struct xdp_hw_meta *data);
>
> ...but in that case, we'd need to support some fields being unset
> anyway, and the program would have to populate the struct on the stack
> before performing the call. So it seems simpler to just have symmetry
> between the get (from HW) and set side? :)

Why not simply bpf_set_skb_meta(struct xdp_md *pkt) and let it store the
metadata somewhere in xdp_md directly? (Also presumably reusing most of
the existing kfuncs/xmo_xxx helpers.)
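
To make that last suggestion concrete, here is a rough sketch of what a
parameter-less bpf_set_skb_meta() could look like, reusing the existing
xmo_* driver callbacks from struct xdp_metadata_ops. Everything else here
is made up for illustration (struct xdp_hw_meta, the HW_META_* flags, and
the kfunc itself exist only in this thread), so treat it as a sketch of
the idea, not a proposed implementation:

/* Illustrative sketch only: snapshot all HW metadata into a fixed
 * struct at the front of the metadata area, so a later consumer (or
 * the xdp->skb conversion) can find it without caring about the order
 * in which the program asked for the fields.
 */
#define HW_META_TIMESTAMP	BIT(0)
#define HW_META_HASH		BIT(1)
#define HW_META_VLAN		BIT(2)

struct xdp_hw_meta {
	u64			timestamp;
	u32			hash;
	enum xdp_rss_hash_type	rss_type;
	__be16			vlan_proto;
	u16			vlan_tci;
	u32			valid_fields;	/* HW_META_* bits */
};

__bpf_kfunc int bpf_set_skb_meta(struct xdp_md *ctx)
{
	struct xdp_buff *xdp = (struct xdp_buff *)ctx;
	const struct xdp_metadata_ops *xmo =
		xdp->rxq->dev->xdp_metadata_ops;
	struct xdp_hw_meta *meta;

	if (!xmo)
		return -EOPNOTSUPP;

	/* Reserve room in the metadata area in front of the packet data
	 * (the same region bpf_xdp_adjust_meta() operates on); a real
	 * implementation would also need to respect alignment and any
	 * existing user-defined metadata.
	 */
	if (xdp->data - sizeof(*meta) < xdp->data_hard_start)
		return -ENOMEM;
	xdp->data_meta = xdp->data - sizeof(*meta);
	meta = xdp->data_meta;

	meta->valid_fields = 0;
	if (xmo->xmo_rx_timestamp &&
	    !xmo->xmo_rx_timestamp(ctx, &meta->timestamp))
		meta->valid_fields |= HW_META_TIMESTAMP;
	if (xmo->xmo_rx_hash &&
	    !xmo->xmo_rx_hash(ctx, &meta->hash, &meta->rss_type))
		meta->valid_fields |= HW_META_HASH;
	if (xmo->xmo_rx_vlan_tag &&
	    !xmo->xmo_rx_vlan_tag(ctx, &meta->vlan_proto, &meta->vlan_tci))
		meta->valid_fields |= HW_META_VLAN;

	return 0;
}

A consumer (another XDP program, or the kernel's xdp->skb conversion)
would then check valid_fields before trusting any member, which is one
way to handle the "some fields being unset" problem mentioned above.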