On Wed, 26 Jun 2019 17:14:32 +0200
Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:

> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
>
> > On Wed, 26 Jun 2019 13:52:16 +0200
> > Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> >
> >> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
> >>
> >> > On Tue, 25 Jun 2019 03:19:22 +0000
> >> > "Machulsky, Zorik" <zorik@xxxxxxxxxx> wrote:
> >> >
> >> >> On 6/23/19, 7:21 AM, "Jesper Dangaard Brouer" <brouer@xxxxxxxxxx> wrote:
> >> >>
> >> >> On Sun, 23 Jun 2019 10:06:49 +0300 <sameehj@xxxxxxxxxx> wrote:
> >> >>
> >> >> > This commit implements the basic functionality of drop/pass logic
> >> >> > in the ena driver.
> >> >>
> >> >> Usually we require a driver to implement all the XDP return codes
> >> >> before we accept it. But as Daniel and I discussed with Zorik during
> >> >> NetConf[1], we are going to make an exception and accept the driver
> >> >> if you also implement XDP_TX.
> >> >>
> >> >> We trust that Zorik/Amazon will follow up and implement XDP_REDIRECT
> >> >> later, given that you want AF_XDP support, which requires XDP_REDIRECT.
> >> >>
> >> >> Jesper, thanks for your comments and very helpful discussion during
> >> >> NetConf! That's the plan, as we agreed. From our side I would like to
> >> >> reiterate the importance of multi-buffer support in the xdp frame.
> >> >> We would really prefer not to see our MTU shrink because of xdp
> >> >> support.
> >> >
> >> > Okay, we really need to make a serious attempt to find a way to
> >> > support multi-buffer packets with XDP, with the important criterion
> >> > of not hurting performance of the single-buffer-per-packet design.
> >> >
> >> > I've created a design document[2], which I will update based on our
> >> > discussions:
> >> > [2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> >> >
> >> > The use-case that really convinced me was Eric's packet header-split.
> >> >
> >> >
> >> > Let's refresh: why XDP doesn't have multi-buffer support:
> >> >
> >> > XDP is designed for maximum performance, which is why certain
> >> > driver-level use-cases were not supported, like multi-buffer packets
> >> > (e.g. jumbo-frames), as they complicate the driver RX-loop and the
> >> > memory model handling.
> >> >
> >> > The single-buffer-per-packet design is also tied into eBPF
> >> > Direct-Access (DA) to packet data, which can only be allowed if the
> >> > packet memory is contiguous. This DA feature is essential for XDP
> >> > performance.
> >> >
> >> >
> >> > One way forward is to define that XDP only gets access to the first
> >> > packet buffer, and cannot see subsequent buffers. For XDP_TX and
> >> > XDP_REDIRECT to work, XDP still needs to carry pointers (plus
> >> > len+offset) to the other buffers, which is 16 bytes per extra buffer.
> >>
> >> Yeah, I think this would be reasonable. As long as we can have a
> >> metadata field with the full length + still give XDP programs the
> >> ability to truncate the packet (i.e., discard the subsequent pages)
> >
> > You touch upon some interesting complications already:
> >
> > 1. Is it valuable for the XDP bpf_prog to know the "full" length?
> >    (If so, then we need to extend the xdp ctx with that info.)
> >
> > But if we need to know the full length when the first buffer is
> > processed, then realize that this affects the driver's RX-loop, because
> > we then need to "collect" all the buffers before we can know the length
> > (although some HW provides this in the first descriptor).
> >
> > We likely have to change the drivers' RX-loop anyhow, as XDP_TX and
> > XDP_REDIRECT will also need to "collect" all buffers before the packet
> > can be forwarded. (Although this could potentially happen later in the
> > driver loop, when it meets/finds the End-Of-Packet descriptor bit.)
>
> A few more points (mostly thinking out loud here):
>
> - In any case we probably need to loop through the subsequent
>   descriptors in all cases, right? (I.e., if we run XDP on the first
>   segment, and that returns DROP, the rest of the buffers that are part
>   of the packet still need to be discarded.) So we may as well delay
>   XDP execution until we have the whole packet?

For the XDP_DROP case, drivers usually have a way to discard the
remaining buffers/segments, to handle error cases. But how
tricky/convoluted this code is depends heavily on the driver...

Generally, I would say it would make sense to delay XDP execution until
all buffers/segments have been "collected". That would be the clean
approach, but it will likely require refactoring of driver-level code.
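To make that a bit more concrete, below is a very rough C sketch of such
a "collect all buffers first, then run XDP once" RX-loop. Every struct
and helper in it (next_filled_desc(), desc_is_eop(), xdp_extra_buf,
etc.) is a made-up placeholder for whatever the individual driver
already has; it is not code from any existing driver and not a proposed
API. The xdp_extra_buf layout also illustrates where the "16 bytes per
extra buffer" cost quoted above comes from (pointer + len + offset on
64-bit).

#include <linux/bpf.h>      /* enum xdp_action: XDP_DROP, XDP_PASS, ... */

#define MAX_BUFS_PER_PKT 17 /* placeholder: first buffer + extra buffers */

struct rx_desc;             /* HW RX descriptor, driver specific */
struct rx_ring;             /* driver RX ring state, driver specific */

/* One extra buffer: pointer + len + offset = 16 bytes on 64-bit */
struct xdp_extra_buf {
    void *data;
    unsigned int len;
    unsigned int offset;
};

struct xdp_multi_pkt {
    struct xdp_extra_buf buf[MAX_BUFS_PER_PKT];
    unsigned int nr_bufs;
    unsigned int total_len;
};

/* Placeholder driver helpers (assumed, not existing kernel API).
 * add_desc_buf() is assumed to append the descriptor's buffer and
 * update nr_bufs/total_len.
 */
struct rx_desc *next_filled_desc(struct rx_ring *ring);
int desc_is_eop(const struct rx_desc *desc);
void add_desc_buf(struct xdp_multi_pkt *pkt, struct rx_desc *desc);
unsigned int run_xdp(struct rx_ring *ring, struct xdp_multi_pkt *pkt);
void recycle_pkt_bufs(struct rx_ring *ring, struct xdp_multi_pkt *pkt);
void pass_pkt_up_stack(struct rx_ring *ring, struct xdp_multi_pkt *pkt);

static int rx_poll(struct rx_ring *ring, int budget)
{
    struct xdp_multi_pkt pkt;
    struct rx_desc *desc;
    int work = 0;

    while (work < budget && (desc = next_filled_desc(ring))) {
        pkt.nr_bufs = 0;
        pkt.total_len = 0;

        /* Collect ALL buffers of this packet, up to and including
         * the descriptor with the End-Of-Packet bit set ...
         */
        do {
            add_desc_buf(&pkt, desc);
        } while (pkt.nr_bufs < MAX_BUFS_PER_PKT &&
                 !desc_is_eop(desc) &&
                 (desc = next_filled_desc(ring)));

        /* ... and only then run XDP once, so the bpf_prog (and a
         * later XDP_TX/XDP_REDIRECT) can know the full length.
         */
        switch (run_xdp(ring, &pkt)) {
        case XDP_DROP:
            recycle_pkt_bufs(ring, &pkt);
            break;
        default:    /* XDP_PASS etc.; TX/REDIRECT must carry all bufs */
            pass_pkt_up_stack(ring, &pkt);
            break;
        }
        work++;
    }
    return work;
}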
> - Will this allow us to run XDP on hardware-assembled GRO super-packets?

Big YES. These are usually called LRO or TSO packets. And yes, I also
want to support this use-case, which is also listed in [2]. If we go
down this road, this use-case is also important. (Especially in
relation to my "alloc SKBs outside drivers" idea[3], this is a hardware
offload we must support.)

[2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[3] http://vger.kernel.org/netconf2019_files/xdp-metadata-discussion.pdf

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer