Re: [PATCH v4 bpf-next 00/13] mvneta: introduce XDP multi-buffer support

> On Mon, 05 Oct 2020 21:29:36 -0700
> John Fastabend <john.fastabend@xxxxxxxxx> wrote:
> 
> > Lorenzo Bianconi wrote:
> > > [...]
> > >   
> > > > 
> > > > In general I see no reason to populate these fields before the XDP
> > > > program runs. Someone needs to convince me why having the frags info
> > > > before the program runs is useful. In general, headers should be
> > > > preserved, and the first frag is already included in the data
> > > > pointers. If users start parsing further they might need it, but this
> > > > series doesn't provide a way to do that, so IMO without those helpers
> > > > it's a bit difficult to debate.  
> > > 
> > > We need to populate the skb_shared_info before running the xdp program
> > > in order to allow the eBPF sandbox to access this data. If we restrict
> > > access to the first buffer only, I guess we can avoid doing that, but I
> > > think there is value in allowing the xdp program to access this data.  
> > 
> > I agree. We could also populate the fields only if the program actually
> > accesses them.
> 
> Notice that a driver will not initialize/use the shared_info area unless
> there are multiple segments.  And (as we have already established) the
> xdp->mb bit guards the BPF-prog from accessing the shared_info area. 
> 
> > > A possible optimization could be to access the shared_info only once,
> > > before running the eBPF program, constructing the shared_info in a
> > > struct allocated on the stack.  
> > 
> > Seems interesting, might be a good idea.
> 
> It *might* be a good idea ("alloc" shared_info on stack), but we should
> benchmark this.  The prefetch trick might be fast enough.  But also
> keep in mind the performance target: with large frames, the
> packets-per-second rate we need to handle drops dramatically.

right. I guess we need to define the workload we want to run for the
xdp multi-buff use-case (e.g. with a 9K MTU we will have ~3 frames
per packet and the pps rate will be much lower)
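
For scale, a back-of-the-envelope estimate at 10 Gbit/s line rate:
64-byte frames correspond to ~14.88 Mpps, while 9000-byte frames
(9018 bytes on the wire, plus preamble and IFG) are only ~138 Kpps,
i.e. roughly two orders of magnitude fewer packets per second to
process.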
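
A rough, untested sketch of the stack-allocated shared_info idea
(imagine this in the driver's rx poll loop, where 'xdp' is the
xdp_buff for the frame and 'desc' the next rx descriptor; the
rx_desc_*() helpers are placeholders, not real mvneta code, and it
assumes the xdp_get_shared_info_from_buff() layout from this series):

/* Collect the frags in a stack copy of skb_shared_info and write it
 * to the real shared_info area with a single memcpy, instead of
 * touching the cache-cold area once per frag.
 */
struct skb_shared_info shinfo = {};
unsigned int nfrags = 0;

while (!rx_desc_is_last(desc) && nfrags < MAX_SKB_FRAGS) {
        skb_frag_t *frag = &shinfo.frags[nfrags++];

        __skb_frag_set_page(frag, desc->page);
        skb_frag_off_set(frag, desc->offset);
        skb_frag_size_set(frag, desc->len);
        desc = rx_desc_next(desc);
}

if (nfrags) {
        shinfo.nr_frags = nfrags;
        /* one write covering only the used frag slots */
        memcpy(xdp_get_shared_info_from_buff(xdp), &shinfo,
               offsetof(struct skb_shared_info, frags[nfrags]));
        xdp->mb = 1;    /* BPF-prog may now look at the frags */
}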


[...]

> 
> I do think it makes sense to drop the helpers for now, focus on how
> this new multi-buffer frame type is handled in the existing code, and do
> some benchmarking on a higher-speed NIC, before the BPF helpers start to
> lock down/restrict what we can change/revert, as they define UAPI.

ack, I will drop them in v5.
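
On the bpf_xdp_adjust_tail point quoted below, I agree this needs to
work first. A hedged sketch of a frag-aware tail shrink, again
assuming the shared_info layout from this series (bounds checks
omitted):

/* Trim 'len' bytes off the tail of a multi-buffer frame: release
 * whole frags from the end, shrink the last partial one, and take
 * any remainder out of the head buffer.
 */
static void xdp_shrink_tail(struct xdp_buff *xdp, unsigned int len)
{
        struct skb_shared_info *shinfo = xdp_get_shared_info_from_buff(xdp);

        while (len && shinfo->nr_frags) {
                skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags - 1];
                unsigned int size = skb_frag_size(frag);

                if (size > len) {
                        skb_frag_size_sub(frag, len);
                        return;
                }
                len -= size;
                __skb_frag_unref(frag);         /* releases the page */
                shinfo->nr_frags--;
        }
        xdp->data_end -= len;
}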

Regards,
Lorenzo

> 
> E.g. existing code that needs to handle this is the existing helper
> bpf_xdp_adjust_tail, which is something I have brought up before and
> even described in [1].  Let's make sure the existing code works with the
> proposed design before introducing new helpers (this also makes it
> easier to revert).
> 
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org#xdp-tail-adjust
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 
