Re: [RFC bpf-next 0/4] Add XDP rx hw hints support performing XDP_REDIRECT

> Lorenzo Bianconi <lorenzo@xxxxxxxxxx> writes:
> 
> >> > We could combine such a registration API with your header format, so
> >> > that the registration just becomes a way of allocating one of the keys
> >> > from 0-63 (and the registry just becomes a global copy of the header).
> >> > This would basically amount to moving the "service config file" into the
> >> > kernel, since that seems to be the only common denominator we can rely
> >> > on between BPF applications (as all attempts to write a common daemon
> >> > for BPF management have shown).
> >> 
> >> That sounds reasonable. And I guess we'd have set() check the global
> >> registry to enforce that the key has been registered beforehand?
> >> 
> >> >
> >> > -Toke
> >> 
> >> Thanks for all the feedback!
> >
> > I like this 'fast' KV approach, but I guess we should really evaluate its
> > impact on performance (especially for xdp) since, based on the order of the
> > kfunc calls in the ebpf program, we can have one or multiple memmove/memcpy
> > operations for each packet, right?
> 
> Yes, with Arthur's scheme, performance will be ordering dependent. Using
> a global registry for offsets would sidestep this, but have the
> synchronisation issues we discussed up-thread. So on balance, I think
> the memmove() suggestion will probably lead to the least pain.
> 
> For the HW metadata we could sidestep this by always having a fixed
> struct for it (but using the same set/get() API with reserved keys). The
> only drawback of doing that is that we statically reserve a bit of
> space, but I'm not sure that is such a big issue in practice (at least
> not until this becomes so popular that the space starts to be contended;
> but surely 256 bytes ought to be enough for everybody, right? :)).

I am fine with the proposed approach, but I think we need to verify its
impact on performance (in the worst case?)
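
To make the worst case a bit more concrete, below is a rough sketch of how a
sorted, TLV-style set() could end up doing a memmove() for every out-of-order
kfunc call. All names here (xdp_kv_entry, xdp_kv_set, ...) are made up just to
illustrate the cost, they are not the actual proposal:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical TLV entry, packed just after struct xdp_frame and kept
 * sorted by key so lookups can stop early. */
struct xdp_kv_entry {
	u8 key;
	u8 len;
	u8 val[];
};

/* Sketch only: if the program sets keys out of key order (e.g. key 7
 * after key 42 has already been stored), every entry behind the
 * insertion point has to be shifted, once per out-of-order set(). */
static int xdp_kv_set(void *kv_area, u16 *kv_used, u16 kv_size,
		      u8 key, const void *val, u8 len)
{
	void *p = kv_area, *end = kv_area + *kv_used;
	struct xdp_kv_entry *e;

	/* find the insertion point; duplicate key handling omitted */
	while (p < end) {
		e = p;
		if (e->key >= key)
			break;
		p += sizeof(*e) + e->len;
	}

	if (*kv_used + sizeof(*e) + len > kv_size)
		return -ENOSPC;

	/* this is the per-packet memmove() I am worried about */
	memmove(p + sizeof(*e) + len, p, end - p);

	e = p;
	e->key = key;
	e->len = len;
	memcpy(e->val, val, len);
	*kv_used += sizeof(*e) + len;
	return 0;
}

If the program happens to call set() in increasing key order, the memmove()
above is always a no-op, which is exactly the ordering dependence you mention.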

> 
> > Moreover, I still think the metadata area in the xdp_frame/xdp_buff is not
> > so suitable for nic hw metadata since:
> > - it grows backward 
> > - it is probably in a different cacheline with respect to xdp_frame
> > - nic hw metadata will not start at a fixed and immutable address, but its
> >   offset depends on the running ebpf program
> >
> > What about having something like:
> > - fixed hw nic metadata: just after the xdp_frame struct (or, if you prefer,
> >   at the end of the metadata area :)). Here we can reuse the same KV approach
> >   if it is fast enough
> > - user defined metadata: in the metadata area of the xdp_frame/xdp_buff
> 
> AFAIU, none of this will live in the (current) XDP metadata area. It
> will all live just after the xdp_frame struct (so sharing the space with
> the metadata area in the sense that adding more metadata kv fields will
> decrease the amount of space that is usable by the current XDP metadata
> APIs).
> 
> -Toke
> 

Ah, ok. I thought the proposed approach was to put them in the current
metadata area.
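
If so, just to double-check my understanding, the layout would be roughly the
sketch below (struct xdp_buff/xdp_frame are the real structs, the helper and
its name are made up):

#include <net/xdp.h>

/* Rough sketch of the layout as I now understand it: the KV/hw hints area
 * grows forward from the end of struct xdp_frame (which sits at
 * data_hard_start), while the existing XDP metadata area still grows
 * backward from xdp->data via bpf_xdp_adjust_meta(). The two share the
 * same headroom, so every byte used for KV entries shrinks what
 * bpf_xdp_adjust_meta() can still claim. */
static unsigned int xdp_kv_free_room(const struct xdp_buff *xdp,
				     unsigned int kv_used)
{
	void *kv_start = xdp->data_hard_start + sizeof(struct xdp_frame);

	return xdp->data_meta - (kv_start + kv_used);
}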

Regards,
Lorenzo
