On Mon, Apr 19, 2021 at 8:56 AM Lorenzo Bianconi <lorenzo.bianconi@xxxxxxxxxx> wrote:
>
> > On Sun, Apr 18, 2021 at 6:18 PM Jesper Dangaard Brouer
> > <brouer@xxxxxxxxxx> wrote:
> > >
> > > On Fri, 16 Apr 2021 16:27:18 +0200
> > > Magnus Karlsson <magnus.karlsson@xxxxxxxxx> wrote:
> > >
> > > > On Thu, Apr 8, 2021 at 2:51 PM Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:
> > > > >
> > > > > This series introduce XDP multi-buffer support. The mvneta driver is
> > > > > the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
> > > > > please focus on how these new types of xdp_{buff,frame} packets
> > > > > traverse the different layers and the layout design. It is on purpose
> > > > > that BPF-helpers are kept simple, as we don't want to expose the
> > > > > internal layout to allow later changes.
> > > > >
> > > > > For now, to keep the design simple and to maintain performance, the XDP
> > > > > BPF-prog (still) only have access to the first-buffer. It is left for
> > > > > later (another patchset) to add payload access across multiple buffers.
> > > > > This patchset should still allow for these future extensions. The goal
> > > > > is to lift the XDP MTU restriction that comes with XDP, but maintain
> > > > > same performance as before.
> > >
> > > [...]
> > >
> > > > > [0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
> > > > > [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> > > > > [2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver (XDP multi-buffers section)
> > > >
> > > > Took your patches for a test run with the AF_XDP sample xdpsock on an
> > > > i40e card and the throughput degradation is between 2 to 6% depending
> > > > on the setup and microbenchmark within xdpsock that is executed. And
> > > > this is without sending any multi frame packets. Just single frame
> > > > ones. Tirtha made changes to the i40e driver to support this new
> > > > interface so that is being included in the measurements.
> > >
> > > Could you please share Tirtha's i40e support patch with me?
> >
> > We will post them on the list as an RFC. Tirtha also added AF_XDP
> > multi-frame support on top of Lorenzo's patches so we will send that
> > one out as well. Will also rerun my experiments, properly document
> > them and send out just to be sure that I did not make any mistake.
>
> ack, very cool, thx

I have now run a new set of experiments on a Cascade Lake server at
2.1 GHz with turbo boost disabled. Two NICs: i40e and ice. The
baseline is commit 5c507329000e ("libbpf: Clarify flags in ringbuf
helpers") and Lorenzo's and Eelco's patch set is their v8.

First some runs with xdpsock (i.e. AF_XDP) in both 2-core mode (app on
one core and the driver on another) and 1-core mode using busy_poll.

xdpsock rxdrop throughput change with the multi-buffer patches without
any driver changes:

1-core i40e: -0.5 to 0%
2-cores i40e: -0.5%
1-core ice: -2%
2-cores ice: -1 to -0.5%

xdp_rxq_info -a XDP_DROP
i40e: -4%
ice: +8%

xdp_rxq_info -a XDP_TX
i40e: -10%
ice: +9%

The XDP results with xdp_rxq_info are just weird! I reran them three
times, rebuilt and rebooted in between and I always get the same
results. And I also checked that I am running on the correct NUMA node
and so on. But I have a hard time believing them. Nearly +10% and -10%
difference. Too much in my book. Jesper, could you please run the same
and see what you get?
The xdpsock numbers are more in the ballpark of what I would expect.
Tirtha and I found some optimizations in the i40e
multi-frame/multi-buffer support that we have implemented. Will test
those next, post the results and share the code.

> >
> > Just note that I would really like for the multi-frame support to get
> > in. I have lost count on how many people that have asked for it to be
> > added to XDP and AF_XDP. So please check our implementation and
> > improve it so we can get the overhead down to where we want it to be.
>
> sure, I will do.
>
> Regards,
> Lorenzo
>
> >
> > Thanks: Magnus
> >
> > > I would like to reproduce these results in my testlab, in-order to
> > > figure out where the throughput degradation comes from.
> > >
> > > > What performance do you see with the mvneta card? How much are we
> > > > willing to pay for this feature when it is not being used or can we in
> > > > some way selectively turn it on only when needed?
> > >
> > > Well, as Daniel says performance wise we require close to /zero/
> > > additional overhead, especially as you state this happens when sending
> > > a single frame, which is a base case that we must not slowdown.
> > >
> > > --
> > > Best regards,
> > > Jesper Dangaard Brouer
> > > MSc.CS, Principal Kernel Engineer at Red Hat
> > > LinkedIn: http://www.linkedin.com/in/brouer
> >
> >
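
As a side note, a rough sketch of what the cover letter's "the XDP
BPF-prog only has access to the first buffer" means in practice. This
is not code from the series or from the benchmarks above, and the
program name is made up; it just shows that an ordinary XDP program
keeps using ctx->data/ctx->data_end as before, and on a multi-buffer
frame those pointers are assumed to cover only the first buffer:

/* Minimal sketch (hypothetical program, not part of this series): an
 * XDP program that parses only the Ethernet header of the first
 * buffer. With the multi-buffer design described in the cover letter,
 * ctx->data/ctx->data_end still cover just the first buffer, so a
 * program like this would keep working unchanged on multi-buffer
 * frames.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_first_buf_only(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* Bounds check against the first buffer, required by the verifier. */
	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;

	/* Pass IPv4 traffic, drop everything else. */
	if (eth->h_proto == bpf_htons(ETH_P_IP))
		return XDP_PASS;

	return XDP_DROP;
}

char _license[] SEC("license") = "GPL";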