Re: XDP multi-buffer incl. jumbo-frames (Was: [RFC V1 net-next 1/1] net: ena: implement XDP drop support)

On Wed, 26 Jun 2019 13:52:16 +0200
Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:

> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
> 
> > On Tue, 25 Jun 2019 03:19:22 +0000
> > "Machulsky, Zorik" <zorik@xxxxxxxxxx> wrote:
> >  
> >> On 6/23/19, 7:21 AM, "Jesper Dangaard Brouer" <brouer@xxxxxxxxxx> wrote:
> >> 
> >>     On Sun, 23 Jun 2019 10:06:49 +0300 <sameehj@xxxxxxxxxx> wrote:
> >>       
> >>     > This commit implements the basic functionality of drop/pass logic in the
> >>     > ena driver.    
> >>     
> >>     Usually we require a driver to implement all the XDP return codes
> >>     before we accept it.  But as Daniel and I discussed with Zorik during
> >>     NetConf[1], we are going to make an exception and accept the driver
> >>     if you also implement XDP_TX.
> >>     
> >>     We trust that Zorik/Amazon will follow up and implement XDP_REDIRECT
> >>     later, given that you want AF_XDP support, which requires XDP_REDIRECT.
> >> 
> >> Jesper, thanks for your comments and very helpful discussion during
> >> NetConf! That's the plan, as we agreed. From our side I would like to
> >> reiterate the importance of multi-buffer support for xdp frames.
> >> We would really prefer not to see our MTU shrink because of xdp
> >> support.
> >
> > Okay, we really need to make a serious attempt to find a way to support
> > multi-buffer packets with XDP, with the important criterion of not
> > hurting performance of the single-buffer-per-packet design.
> >
> > I've created a design document[2] that I will update based on our
> > discussions: [2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> >
> > The use-case that really convinced me was Eric's packet header-split.
> >
> >
> > Let's refresh: why XDP doesn't have multi-buffer support:
> >
> > XDP is designed for maximum performance, which is why certain driver-level
> > use-cases were not supported, like multi-buffer packets (e.g. jumbo-frames),
> > as they complicate the driver RX-loop and memory model handling.
> >
> > The single-buffer-per-packet design is also tied into eBPF Direct-Access
> > (DA) to packet data, which can only be allowed if the packet data is in
> > contiguous memory.  This DA feature is essential for XDP performance.
> >
> >
> > One way forward is to define that XDP only gets access to the first
> > packet buffer, and cannot see subsequent buffers. For XDP_TX and
> > XDP_REDIRECT to work, XDP still needs to carry pointers (plus
> > len+offset) to the other buffers, which is 16 bytes per extra buffer.
> 
> Yeah, I think this would be reasonable, as long as we can have a
> metadata field with the full length and still give XDP programs the
> ability to truncate the packet (i.e., discard the subsequent pages).

You touch upon some interesting complications already:

1. Is it valuable for the XDP bpf_prog to know the "full" length?
   (If so, we need to extend the xdp ctx with that info.)

 But if we need to know the full length when the first buffer is
 processed, this affects the drivers' RX-loop, because we then need to
 "collect" all the buffers before we can know the length (although some
 HW provides this in the first descriptor).

 We likely have to change the drivers' RX-loop anyhow, as XDP_TX and
 XDP_REDIRECT will also need to "collect" all buffers before the packet
 can be forwarded. (Although this could potentially happen later in the
 driver loop, when it finds the End-Of-Packet descriptor bit.)
 

2. Can we even allow the helper bpf_xdp_adjust_tail()?

 Wouldn't it be easier to disallow loading a BPF-prog that uses this
 helper when the driver has configured multi-buffer?  Or would that be
 too restrictive, if jumbo-frames are very uncommon and only enabled
 because the switch infra could not be changed (like the Amazon case)?

 Perhaps it is better to let bpf_xdp_adjust_tail() fail at runtime?
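
 Something like this is what I mean by the runtime option (a minimal
 sketch based on the current helper in net/core/filter.c; the xdp->mb
 flag is hypothetical, assuming the driver marks multi-buffer frames):

   BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
   {
           void *data_end = xdp->data_end + offset;

           /* Hypothetical: reject tail adjustment when the frame
            * spans multiple buffers, instead of rejecting the
            * bpf_prog at load time.
            */
           if (unlikely(xdp->mb))
                   return -EOPNOTSUPP;

           /* Only shrinking the frame is allowed today */
           if (unlikely(offset >= 0))
                   return -EINVAL;

           if (unlikely(data_end < xdp->data + ETH_HLEN))
                   return -EINVAL;

           xdp->data_end = data_end;
           return 0;
   }

 The downside is that the bpf_prog only discovers the restriction at
 runtime, and has to handle the error return.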


> I think many (most?) use cases will work fine without having access
> to the full packet data...

I agree.  Other people should voice their concerns if they don't
agree...
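
Just to illustrate why: the typical XDP prog only parses headers, and
those live in the first buffer on any sane HW split anyway.  A minimal
self-contained example (drops UDP, passes everything else):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/in.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("xdp")
  int xdp_drop_udp(struct xdp_md *ctx)
  {
          void *data     = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          struct ethhdr *eth = data;
          struct iphdr *iph;

          /* Bounds checks required by the verifier */
          if (data + sizeof(*eth) > data_end)
                  return XDP_PASS;
          if (eth->h_proto != bpf_htons(ETH_P_IP))
                  return XDP_PASS;

          iph = data + sizeof(*eth);
          if ((void *)(iph + 1) > data_end)
                  return XDP_PASS;

          return iph->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";

Nothing here ever needs to look past the first buffer.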

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


