On 2020-05-08 15:27, Björn Töpel wrote:
On 2020-05-08 13:55, Maxim Mikityanskiy wrote:
On 2020-05-07 13:42, Björn Töpel wrote:
From: Björn Töpel <bjorn.topel@xxxxxxxxx>
Use the new MEM_TYPE_XSK_BUFF_POOL API in lieu of MEM_TYPE_ZERO_COPY in
mlx5e. It allows us to drop a lot of code from the driver (code that is now
common in the AF_XDP core and was related to XSK RX frame allocation, DMA
mapping, etc.) and to slightly improve performance.
rfc->v1: Put back the sanity check for XSK params, use XSK API to get
the total headroom size. (Maxim)
Signed-off-by: Björn Töpel <bjorn.topel@xxxxxxxxx>
Signed-off-by: Maxim Mikityanskiy <maximmi@xxxxxxxxxxxx>
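For context, a rough sketch of what the RX side looks like with the new API,
assuming the v5.8-era helpers from include/net/xdp_sock_drv.h (the signatures
changed again later when the buffer pool was split out of the umem). The
functions setup_xsk_rq() and xsk_rx_alloc_one() below are made-up names for
illustration, not actual mlx5e code; the "total headroom" mentioned in the
changelog presumably comes from xsk_umem_get_headroom().

#include <net/xdp.h>
#include <net/xdp_sock_drv.h>

/* Sketch only: register the XSK buff pool memory model and let the AF_XDP
 * core own frame allocation and DMA mapping instead of the driver.
 */
static int setup_xsk_rq(struct xdp_rxq_info *rxq, struct xdp_umem *umem)
{
	int err;

	/* Frames handed out by the core are released via xsk_buff_free(). */
	err = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_XSK_BUFF_POOL, NULL);
	if (err)
		return err;

	/* Let the pool fill in xdp_buff->rxq for every allocated frame. */
	xsk_buff_set_rxq_info(umem, rxq);
	return 0;
}

/* Per-descriptor allocation: the xdp_buff, its UMEM frame and its DMA
 * address now all come from the core rather than driver-local code.
 */
static struct xdp_buff *xsk_rx_alloc_one(struct xdp_umem *umem, dma_addr_t *dma)
{
	struct xdp_buff *xdp = xsk_buff_alloc(umem);

	if (!xdp)
		return NULL;
	*dma = xsk_buff_xdp_get_dma(xdp);
	return xdp;
}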
I did some functional and performance tests.
Unfortunately, something is wrong with the traffic: I get zeros in
XDP_TX, XDP_PASS and XSK instead of packet data. I set DEBUG_HEXDUMP
in xdpsock, and it shows the packets of the correct length, but all
bytes are 0 after these patches. It might be caused by wrong xdp_buff
pointers; however, I still have to investigate it. Björn, does it also affect
the Intel drivers, or is it Mellanox-specific?
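(DEBUG_HEXDUMP is a compile-time flag near the top of the xdpsock sample,
samples/bpf/xdpsock_user.c; it defaults to 0, and a non-zero value makes the
sample hex-dump every frame it handles, roughly:

/* samples/bpf/xdpsock_user.c; exact line may differ between kernel versions */
#define DEBUG_HEXDUMP 1	/* default 0; non-zero dumps each frame's bytes */
)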
Are you getting zeros for TX, PASS *and* in xdpsock (REDIRECT:ed
packets), or just TX and PASS?
Yes, in all modes: XDP_TX, XDP_PASS and XDP_REDIRECT to XSK (xdpsock).
No, I get correct packet data for AF_XDP zero-copy XDP_REDIRECT,
XDP_PASS, and XDP_TX for Intel.
Hmm, weird - with the new API I expected the same behavior on all
drivers. Thanks for the information; now I know that I need to look in the
mlx5 code to find the issue.
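Just as an illustration of the kind of thing worth checking (not a claim about
the actual root cause): with MEM_TYPE_XSK_BUFF_POOL the driver still has to
sync the frame for the CPU and set the per-packet length itself before running
the XDP program, so a missed sync or a data pointer that doesn't point into
the received UMEM frame would show up exactly like this: correct length,
zeroed bytes. A sketch with the v5.8-era helpers (xsk_rx_complete_one() is a
made-up name, not a driver function):

/* Sketch only: what the RX completion path has to do per packet before
 * bpf_prog_run_xdp(), so the program and AF_XDP consumers see real data.
 */
static void xsk_rx_complete_one(struct xdp_buff *xdp, u32 byte_cnt)
{
	/* Bring the DMA'ed payload into the CPU's view; skipping this can
	 * leave the CPU reading stale (e.g. zero-initialized) memory. */
	xsk_buff_dma_sync_for_cpu(xdp);

	/* data/data_meta were already set by xsk_buff_alloc(); only the
	 * length is per-packet. A wrong data pointer gives the right length
	 * but the wrong (or zeroed) contents in a hexdump. */
	xdp->data_end = xdp->data + byte_cnt;
}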
For performance, I got +1.0..+1.2 Mpps on RX. TX performance got
better after Björn inlined the relevant UMEM functions; however, there
is still a slight decrease compared to the old code. I'll try to find
the possible reason, but the good thing is that it's not significant
anymore.
Ok, so for Rx mlx5 it's the same as for i40e. Good! :-)
How much decrease on Tx?
~0.8 Mpps (was 3.1 before you inlined the functions).
Björn