On 10/01, Toke Høiland-Jørgensen wrote:
> Lorenzo Bianconi <lorenzo@xxxxxxxxxx> writes:
>
> >> On Mon Sep 30, 2024 at 1:49 PM CEST, Lorenzo Bianconi wrote:
> >> > > Lorenzo Bianconi <lorenzo@xxxxxxxxxx> writes:
> >> > >
> >> > > >> > We could combine such a registration API with your header format, so
> >> > > >> > that the registration just becomes a way of allocating one of the keys
> >> > > >> > from 0-63 (and the registry just becomes a global copy of the header).
> >> > > >> > This would basically amount to moving the "service config file" into the
> >> > > >> > kernel, since that seems to be the only common denominator we can rely
> >> > > >> > on between BPF applications (as all attempts to write a common daemon
> >> > > >> > for BPF management have shown).
> >> > > >>
> >> > > >> That sounds reasonable. And I guess we'd have set() check the global
> >> > > >> registry to enforce that the key has been registered beforehand?
> >> > > >> >
> >> > > >> > -Toke
> >> > > >>
> >> > > >> Thanks for all the feedback!
> >> > > >
> >> > > > I like this 'fast' KV approach but I guess we should really evaluate its
> >> > > > impact on performance (especially for XDP) since, based on the kfunc call
> >> > > > order in the eBPF program, we can have one or multiple memmove/memcpy for
> >> > > > each packet, right?
> >> > >
> >> > > Yes, with Arthur's scheme, performance will be ordering dependent. Using
> >> > > a global registry for offsets would sidestep this, but have the
> >> > > synchronisation issues we discussed up-thread. So on balance, I think
> >> > > the memmove() suggestion will probably lead to the least pain.
> >> > >
> >> > > For the HW metadata we could sidestep this by always having a fixed
> >> > > struct for it (but using the same set/get() API with reserved keys). The
> >> > > only drawback of doing that is that we statically reserve a bit of
> >> > > space, but I'm not sure that is such a big issue in practice (at least
> >> > > not until this becomes so popular that the space starts to be contended;
> >> > > but surely 256 bytes ought to be enough for everybody, right? :)).
> >> >
> >> > I am fine with the proposed approach, but I think we need to verify what the
> >> > impact on performance is (in the worst case??)
> >>
> >> If drivers are responsible for populating the hardware metadata before
> >> XDP, we could make sure drivers set the fields in order to avoid any
> >> memmove() (and maybe even provide a helper to ensure this?).
> >
> > nope, since the current APIs introduced by Stanislav are consuming NIC
> > metadata in kfuncs (mainly for af_xdp) and, according to my understanding,
> > we want to add a kfunc to store the info for each NIC metadata field (e.g. rx-hash,
> > timestamping, ..) into the packet (this is what Toke is proposing, right?).
> > In this case kfunc calling order makes a difference.
> > We could even add a single kfunc to store the info for all the NIC
> > metadata (maybe via a helper struct), but that seems not scalable to me and we
> > would lose kfunc versatility.
>
> Yes, I agree we should have separate kfuncs for each metadata field.
> Which means it makes a lot of sense to just use the same setter API that
> we use for the user-registered metadata fields, but using reserved keys.
> So something like:
>
> #define BPF_METADATA_HW_HASH		BIT(60)
> #define BPF_METADATA_HW_TIMESTAMP	BIT(61)
> #define BPF_METADATA_HW_VLAN		BIT(62)
> #define BPF_METADATA_RESERVED		(0xffff << 48)
>
> bpf_packet_metadata_set(pkt, BPF_METADATA_HW_HASH, hash_value);
>
>
> As for the internal representation, we can just have the kfunc do
> something like:
>
> int bpf_packet_metadata_set(field_id, value) {
>   switch(field_id) {
>   case BPF_METADATA_HW_HASH:
>     pkt->xdp_hw_meta.hash = value;
>     break;
>   [...]
>   default:
>     /* do the key packing thing */
>   }
> }
>
>
> that way the order of setting the HW fields doesn't matter, only the
> user-defined metadata.

Can you expand on why we need the flexibility of picking the metadata
fields here? Presumably we are talking about the use cases where the XDP
program is doing redirect/pass and it doesn't really know who the final
consumer is (might be another XDP program or might be the xdp->skb kernel
case), so the only sensible option here seems to be to store everything?
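
For reference, a rough, untested sketch of what that "store everything"
case could look like from the XDP program side, using only the names from
the snippet above: bpf_packet_metadata_set() is the proposed setter (assumed
here to take the xdp_md ctx rather than a pkt handle), the BPF_METADATA_HW_*
keys are spelled out without BIT(), and the rx_hash/rx_timestamp kfuncs are
the existing metadata ones:

/* Sketch only: bpf_packet_metadata_set() is the proposed setter from the
 * snippet above and does not exist today; the rx metadata kfuncs are the
 * existing ones from the XDP metadata series.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define BPF_METADATA_HW_HASH		(1ULL << 60)
#define BPF_METADATA_HW_TIMESTAMP	(1ULL << 61)

extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
				    enum xdp_rss_hash_type *rss_type) __ksym;
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
/* proposed setter, hypothetical signature */
extern int bpf_packet_metadata_set(struct xdp_md *ctx, __u64 field_id,
				   __u64 value) __ksym;

SEC("xdp")
int store_everything(struct xdp_md *ctx)
{
	enum xdp_rss_hash_type rss_type;
	__u64 timestamp;
	__u32 hash;

	/* The final consumer is unknown (another XDP program or the
	 * xdp->skb path), so stash every HW field the device provides.
	 */
	if (!bpf_xdp_metadata_rx_hash(ctx, &hash, &rss_type))
		bpf_packet_metadata_set(ctx, BPF_METADATA_HW_HASH, hash);
	if (!bpf_xdp_metadata_rx_timestamp(ctx, &timestamp))
		bpf_packet_metadata_set(ctx, BPF_METADATA_HW_TIMESTAMP,
					timestamp);

	/* ... likewise for VLAN and any user-registered keys ... */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

With that shape, the ordering concern discussed above would only apply to
the user-defined keys, since the HW fields land in the fixed struct
regardless of the order of the setter calls.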