Re: [PATCH V3 net-next] net: fec: add XDP_TX feature support

On 03/08/2023 13.18, Wei Fang wrote:
On 03/08/2023 05.58, Wei Fang wrote:
                   } else {
-                 xdp_return_frame(xdpf);
+                 xdp_return_frame_rx_napi(xdpf);
If you implement Jesper's syncing suggestions, I think you can use

    page_pool_put_page(pool, page, 0, true);
To Jakub: by using 0 here you are trying to bypass the DMA-sync (which is
valid, as the driver knows XDP_TX has already done the sync).
The code will still call into the DMA-sync functions with zero as the size,
so I wonder if we should detect size zero and skip that call?
(I mean, is this something page_pool should support?)
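
A minimal sketch of how that could look, modeled on the shape of
page_pool_dma_sync_for_device() in net/core/page_pool.c (the early
return for size zero is the hypothetical part):

   static void page_pool_dma_sync_for_device(struct page_pool *pool,
                                             struct page *page,
                                             unsigned int dma_sync_size)
   {
           dma_addr_t dma_addr = page_pool_get_dma_addr(page);

           /* Hypothetical: a caller passing 0 (e.g. XDP_TX) signals
            * the buffer is already synced, so skip the DMA API call.
            */
           if (!dma_sync_size)
                   return;

           dma_sync_size = min(dma_sync_size, pool->p.max_len);
           dma_sync_single_range_for_device(pool->p.dev, dma_addr,
                                            pool->p.offset, dma_sync_size,
                                            pool->p.dma_dir);
   }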

[...]


for XDP_TX here to avoid the DMA sync on page recycle.
I tried Jesper's syncing suggestion and used page_pool_put_page() to
recycle pages, but the results do not seem to improve the
performance of XDP_TX,
The optimization will only have an effect on devices that have
dev->dma_coherent=false; otherwise the DMA function [1] (e.g.
dma_direct_sync_single_for_device) will skip the sync calls.

 [1] https://elixir.bootlin.com/linux/v6.5-rc4/source/kernel/dma/direct.h#L63
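
For reference, the function behind [1] looks roughly like this
(paraphrased from kernel/dma/direct.h):

   static inline void dma_direct_sync_single_for_device(struct device *dev,
                   dma_addr_t addr, size_t size, enum dma_data_direction dir)
   {
           phys_addr_t paddr = dma_to_phys(dev, addr);

           if (unlikely(is_swiotlb_buffer(dev, paddr)))
                   swiotlb_sync_single_for_device(dev, paddr, size, dir);

           /* On dma-coherent devices the arch sync is skipped entirely,
            * so the page recycle optimization has nothing to save.
            */
           if (!dev_is_dma_coherent(dev))
                   arch_sync_dma_for_device(paddr, size, dir);
   }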

(Cc. Andrew Lunn)
Do any of the imx generations have dma-noncoherent memory?

And do any of those use the fec NIC driver?

it even degrades the speed.

Could the low runs simply be variation between your test runs?

Maybe; I had only tested once before. So I tested several more times,
and the results of the two methods do not seem to differ much so far,
both about 255000 pkt/s.

The specific device (imx8mpevk) this was tested on clearly has
dma_coherent=true, or else we would have seen a difference.
But the code change should not add any overhead for the
dma_coherent=true case; the only extra overhead is the empty DMA
sync call with size zero (as discussed at the top).

The FEC of i.MX8MP-EVK has dma_coherent=false, and as I mentioned
above, I did not see an obvious difference in the performance. :(

That is surprising, given the results.

(See below; the lack of a perf difference might be caused by Ethernet flow-control.)


The results with the current modification:
root@imx8mpevk:~# ./xdp2 eth0
proto 17:     260180 pkt/s

These results are *significantly* better than those reported in patch-1.
What happened?!?

The test environment is slightly different: in patch-1, the FEC port was
directly connected to the port of another board, but in the latest test
both boards were connected to a switch, so the two ports are not
directly connected.


Hmm, I've seen this kind of perf difference between direct-connected
and via-switch setups before. The mistake I made was that I had not
disabled Ethernet flow-control. The xdp2 XDP_TX program will swap the
MAC addresses and send the packet back to the packet generator (running
pktgen), which will get overloaded itself and start sending Ethernet
flow-control pause frames.

Command line to disable:
 # ethtool -A eth0 rx off tx off

Can I ask you to make sure that Ethernet flow-control is disabled
(on both the generator and the DUT, to be on the safe side) and run the
test again?
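
You can verify the current pause settings on each side with:

 # ethtool -a eth0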

--Jesper

e.g.
   root@imx8mpevk:~# ./xdp2 eth0
   proto 17:     135817 pkt/s
   proto 17:     142776 pkt/s

proto 17:     260373 pkt/s
proto 17:     260363 pkt/s
proto 17:     259036 pkt/s
[...]

After using the sync suggestion, the results are as follows.
root@imx8mpevk:~# ./xdp2 eth0
proto 17:     255956 pkt/s
proto 17:     255841 pkt/s
proto 17:     255835 pkt/s



