Re: XDP and AF_XDP performance comparison

On 22/09/22 20:38, Toke Høiland-Jørgensen wrote:
> Hi Federico
>
> Thank you for the link! All in all I thought it was a nicely done
> performance comparison.

Dear Toke,
Thank you very much for your observations and your interest in my work.

> One thing that might be interesting would be to do the same comparison
> on a different driver. A lot of the performance details you're
> discovering in this paper boils down to details about how the driver
> data path is implemented. For instance, it's an Intel-specific thing
> that there's a whole separate path for zero-copy AF_XDP. Any plans to
> replicate the study using, say, an mlx5-based NIC?

The impact of the driver on the results was clear from the beginning; however, I wasn't aware of mlx5 using the same path for XDP and zero-copy AF_XDP, I thought separate paths were the norm (my bad for not checking). This could radically change the results for NVIDIA NICs. I performed similar (but less extensive) tests on an X540 NIC running the ixgbe driver, and the results show the same relationship between XDP and AF_XDP, even though the performance gaps are smaller.

Another factor that impacts the results is the kernel version: again, the same relationship between XDP and AF_XDP, but with different gaps. In particular, I experienced significant performance drops (for both XDP and AF_XDP) moving from kernel 5.15 to 5.16, and another one from 5.18 to 5.19 (the latter much more pronounced).

Unfortunately, I don't have any mlx5 NICs at my disposal in the lab at the moment. If you are aware of any way I could experiment on an NVIDIA NIC (I know there are some open testbeds), that would be very interesting.

> Also, a couple of comments on details:
>
> - The performance delta you show in Figure 9 where AF_XDP is faster at
>    hair-pin forwarding than XDP was a bit puzzling; the two applications
>    should basically be doing the same thing. It seems to be because the
>    i40e driver converts the xdp_buff struct to an xdp_frame before
>    transmitting it out the interface again:
>
>    https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/intel/i40e/i40e_txrx.c#L2280

Regarding XDP_TX performance with AF_XDP sockets enabled (XDP-sk in the draft), this is definitely the case, since the conversion from xdp_buff to xdp_frame requires copying the whole packet into a new memory page:
https://elixir.bootlin.com/linux/latest/source/net/core/xdp.c#L559

For pure XDP (no AF_XDP sockets enabled), on the other hand, the conversion only requires copying a few fields. However, given the very small size of the packet-processing function (macswap), even those copies can have a significant impact. This would also explain why the gap between XDP and AF_XDP shrinks so much when we move from macswap (+29%) to the load balancer (+14%). It seems to me, though, that the conversion is common to all drivers, not specific to Intel, so I wonder whether it could be avoided (maybe by relying only on the xdp_frame?).
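To make the asymmetry concrete, here is a minimal userspace model of the two conversion paths. This is only a sketch: the struct layout, sizes, and helper names are invented for illustration (malloc() stands in for dev_alloc_page()), it is not the kernel's actual code:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HEADROOM 256
#define PKT_LEN  64                     /* macswap-style minimum-size packet */

struct frame_hdr {                      /* stand-in for struct xdp_frame */
	void     *data;
	uint32_t  len;
	uint32_t  headroom;
};

/* Non-ZC path: the frame header lives in the packet's own headroom,
 * so converting only writes a few fields and copies no payload. */
static struct frame_hdr *convert_in_place(uint8_t *hard_start, uint8_t *data,
					  uint32_t len)
{
	struct frame_hdr *f = (struct frame_hdr *)hard_start;

	f->data = data;
	f->len = len;
	f->headroom = (uint32_t)(data - hard_start);
	return f;
}

/* ZC path: the buffer belongs to the AF_XDP umem and must go back to the
 * fill ring, so the whole packet is cloned into freshly allocated memory. */
static struct frame_hdr *convert_clone(const uint8_t *data, uint32_t len)
{
	uint8_t *page = malloc(HEADROOM + len); /* dev_alloc_page() stand-in */
	struct frame_hdr *f = (struct frame_hdr *)page;

	if (!page)
		return NULL;
	memcpy(page + HEADROOM, data, len);     /* full per-packet copy */
	f->data = page + HEADROOM;
	f->len = len;
	f->headroom = HEADROOM;
	return f;
}

int main(void)
{
	uint8_t buf[HEADROOM + PKT_LEN] = { 0 };
	struct frame_hdr *a = convert_in_place(buf, buf + HEADROOM, PKT_LEN);
	struct frame_hdr *b = convert_clone(buf + HEADROOM, PKT_LEN);

	printf("in-place frame len %u, cloned frame len %u\n",
	       a->len, b ? b->len : 0);
	free(b);
	return 0;
}

Even a 64-byte memcpy() plus an allocation per packet is a large cost relative to a function as small as macswap, while the in-place path only touches headroom that is typically already in cache.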

> - It's interesting that userspace seems to handle scattered memory
>    accesses over a large range better than kernel-space. It would be
>    interesting to know why; you mention you're leaving this to future
>    studies, any plans of following up and trying to figure this out? :)

This is definitely the most curious result. Given my limited (but improving) knowledge of XDP and AF_XDP internals, I limited myself to observing this behavior. I think the key next step would be mapping the additional LLC store operation that XDP performs for every packet (even when dropping them) to some code in the driver/XDP subsystem; this store basically gives XDP-based I/O almost double the LLC occupancy of AF_XDP-based I/O. Checking whether this is Intel-specific or also applies to NVIDIA would also help narrow down the possibilities. Any guidance on how to further inspect the problem would be really appreciated.
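For reference, a minimal sketch of one way to count LLC store accesses on the receive CPU with perf_event_open(2). This uses the generic PERF_COUNT_HW_CACHE_* encoding; how it maps to concrete LLC events is CPU-dependent, so absolute numbers should be taken with a grain of salt:

#include <linux/perf_event.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Count LLC store accesses on one CPU (e.g. the core running the
 * driver's NAPI loop). Needs root or CAP_PERFMON. */
static int open_llc_stores(int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HW_CACHE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CACHE_LL |
		      (PERF_COUNT_HW_CACHE_OP_WRITE << 8) |
		      (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16);
	attr.disabled = 1;

	/* pid == -1, cpu >= 0: count everything running on that CPU */
	return syscall(SYS_perf_event_open, &attr, -1, cpu, -1, 0);
}

int main(void)
{
	int fd = open_llc_stores(0);    /* CPU pinned to the RX queue */
	uint64_t count;

	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	sleep(10);                      /* measurement window under load */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("LLC store accesses: %" PRIu64 "\n", count);
	close(fd);
	return 0;
}

Dividing the counter by the number of packets processed in the same window gives a per-packet store rate; comparing an XDP drop run against an AF_XDP rxdrop run this way should make the extra per-packet store visible.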

> Finally, since you seem to have your tests packaged up nicely, do you
> think it would be possible to take (some of) them and turn them into a
> kind of "performance CI" test suite, that can be run automatically, or
> semi-automatically to catch future performance regressions in the XDP
> stack? Such a test suite would be pretty great to have so we can avoid
> the "death by a thousand paper cuts" type of gradual performance
> degradation as we add new features...

I would be very happy if my work could benefit the community. Please let me know if you have any ideas or guidelines on how my testing suite could be integrated into the XDP environment; I guess the xdp-tools repo would be the ideal target?

Best regards,
Federico


