Re: ixgbe zero-copy performance

On Wed, Jul 10, 2024 at 4:28 PM Magnus Karlsson
<magnus.karlsson@xxxxxxxxx> wrote:
>
> On Wed, 10 Jul 2024 at 12:07, Srivats P <pstavirs@xxxxxxxxx> wrote:
> >
> > > > What is the expected performance for AF_XDP txonly in zero-copy and copy modes?
> > > >
> > > > With Kernel 6.5.0 and the same ixgbe driver, this is what we see -
> > > >
> > > > ZC mode: 4.3Mpps
> > > > Copy mode: 3.3Mpps
> > > >
> > > > This doesn't seem right. Shouldn't the zero copy performance be MUCH higher?
> > >
> > > Zero-copy performance should be line rate for the ixgbe card, so
> > > somewhere around 15Mpps. SKB mode seems in the correct ballpark. Try
> > > pinning the app to a core the driver does not run on, or use busy-poll
> > > mode "-B". If you are running on a NUMA system, make sure you are
> > > running both driver and app on the NUMA node you have plugged your NIC
> > > into.
> >
> > I had forced zero-copy mode (-z) and copy mode (-c) to get the above
> > results. So the xsk creation would have failed if it were not zero
> > copy mode.
>
> Just to avoid any confusion: copy mode = skb mode for Tx. The 3.3 Mpps
> seems about correct for that mode.
>
> > This is not a NUMA system - just one CPU with 8 cores, HT
> > disabled. Nothing else taking up CPU on the system, so I don't think
> > the app and the softirq would have been sharing a core.
>
> I would ask them to verify that this is not the case. It happens.
>
> > Unfortunately, this is at a customer - so I have asked them to try
> > taskset or use -B. Will update when I hear back from them.
>
> Sounds good.
>
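For reference, Magnus's pinning and busy-poll suggestions above come down to something like the following (flags are from the kernel's xdpsock sample app; the interface name, queue, and core number here are placeholders, not from this thread):

```shell
# Pin the xdpsock txonly app to a core that the NIC's IRQs/softirq
# do not run on, and optionally enable busy-poll.
# eth0, queue 0, and core 3 are example values - adjust for your setup.
taskset -c 3 ./xdpsock -i eth0 -q 0 -t -z      # zero-copy txonly, pinned to core 3
taskset -c 3 ./xdpsock -i eth0 -q 0 -t -z -B   # same, with busy-poll enabled
```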

xdpsock txonly topped out at 3.5 Gbps irrespective of mode (zc, copy,
busy-poll) or packet size. It turned out to be a PCIe bandwidth issue
-

[    6.573856] ixgbe 0000:03:00.0: 4.000 Gb/s available PCIe
bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:1c.0 (capable
of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
[    6.749966] ixgbe 0000:03:00.1: 4.000 Gb/s available PCIe
bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:1c.0 (capable
of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
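The numbers in the log lines follow directly from PCIe arithmetic: Gen2 signals at 5 GT/s per lane with 8b/10b encoding (8 payload bits per 10 bits on the wire), so a x1 link yields 5 * 8/10 = 4 Gb/s, while the card's x8 capability gives 32 Gb/s - a quick sanity check:

```shell
# PCIe Gen2: 5 GT/s per lane, 8b/10b encoding => 0.8 efficiency
awk 'BEGIN {
    gt = 5.0; eff = 8/10
    printf "x1: %g Gb/s\n", gt * eff * 1   # the limited slot in the dmesg above
    printf "x8: %g Gb/s\n", gt * eff * 8   # what the card is capable of
}'
```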

Moving the card to a different slot fixed the problem.
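For anyone hitting this later: the negotiated link can be compared against the device's capability at runtime via sysfs, without digging through dmesg. The PCI address below is the one from the log lines above; substitute your own NIC's:

```shell
# Compare the negotiated PCIe link against the device's capability.
# 0000:03:00.0 is the address from the dmesg output above.
dev=/sys/bus/pci/devices/0000:03:00.0
echo "current: $(cat $dev/current_link_speed) x$(cat $dev/current_link_width)"
echo "max:     $(cat $dev/max_link_speed) x$(cat $dev/max_link_width)"

# lspci shows the same information as LnkSta (negotiated) vs LnkCap (capability):
# lspci -s 03:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
```

If current speed or width is below max, the card is in a slot (or behind a bridge) that cannot feed it.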

Thanks for your help, Magnus!

Srivats (Founder, Ostinato)
Now generate up to 100Gbps 🚀 with the Ostinato Turbo add-on!




