Re: [PATCH v4 net-next RFC] net: Generic XDP

On Wed, 19 Apr 2017 10:29:03 -0400
Andy Gospodarek <andy@xxxxxxxxxxxxx> wrote:

> I ran this on top of a card that uses the bnxt_en driver on a desktop
> class system with an i7-6700 CPU @ 3.40GHz, sending a single stream of
> UDP traffic with flow control disabled and saw the following (all stats
> in Million PPS).
> 
>                   xdp1               xdp2      xdp_tx_tunnel
> Generic XDP        7.8   5.5 (1.3 actual)   4.6 (1.1 actual)
> Optimized XDP     11.7                9.7                4.6
> 
> One thing to note is that the Generic XDP case shows different results
> as reported by the application vs. actual (seen on the wire).  I have
> not yet debugged where the drops are happening or which counter needs
> to be incremented to record them -- I'll add that to my TODO list.  The
> Optimized XDP case shows no difference between reported and actual
> frames on the wire.

The reported-by-the-application vs. actual (seen on the wire) numbers sound scary.
How do you evaluate/measure "seen on the wire"?

Perhaps you could check the ethtool -S stats to see if anything looks fishy?
I recommend using my tool[1] like:

 ~/git/network-testing/bin/ethtool_stats.pl --dev mlx5p2 --sec 2

[1] https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
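
For reference, the core of what that script does is just sample the
counters and print per-second deltas.  Below is a minimal Python sketch
of the same idea (not the Perl tool itself), assuming the usual
"name: value" lines that `ethtool -S` prints, with the device name given
as an argument:

  #!/usr/bin/env python3
  # Minimal sketch of the per-second delta idea (not the Perl tool above).
  # Assumes the usual "  counter_name: value" output lines of `ethtool -S`.
  import re, subprocess, sys, time

  def read_stats(dev):
      out = subprocess.run(["ethtool", "-S", dev], capture_output=True,
                           text=True, check=True).stdout
      stats = {}
      for line in out.splitlines():
          m = re.match(r"\s*([^:]+):\s*(\d+)\s*$", line)
          if m:
              stats[m.group(1)] = int(m.group(2))
      return stats

  def main(dev, interval=2.0):
      prev = read_stats(dev)
      while True:
          time.sleep(interval)
          cur = read_stats(dev)
          for name, val in sorted(cur.items()):
              # Only print counters that actually moved in this interval.
              rate = (val - prev.get(name, val)) / interval
              if rate > 0:
                  print("%-35s %15s /sec" % (name, format(int(rate), ",")))
          print("---")
          prev = cur

  if __name__ == "__main__":
      main(sys.argv[1] if len(sys.argv) > 1 else "mlx5p2")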

I'm evaluating this patch on a mlx5 NIC, and something is not right...
I'm seeing:

 Ethtool(mlx5p2) stat:     349599 (        349,599) <= tx_multicast_phy /sec
 Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= tx_packets /sec
 Ethtool(mlx5p2) stat:     349596 (        349,596) <= tx_packets_phy /sec
 [...]
 Ethtool(mlx5p2) stat:      36898 (         36,898) <= rx_cache_busy /sec
 Ethtool(mlx5p2) stat:      36898 (         36,898) <= rx_cache_full /sec
 Ethtool(mlx5p2) stat:    4903287 (      4,903,287) <= rx_cache_reuse /sec
 Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= rx_csum_complete /sec
 Ethtool(mlx5p2) stat:    4940185 (      4,940,185) <= rx_packets /sec

Something is wrong... when I tcpdump on the generator machine, I see
garbled packets with IPv6 multicast addresses.

And it looks like I'm only sending 349,596 tx_packets_phy/sec on the "wire".
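
For anyone trying to reproduce this: besides tcpdump, a quick way to
eyeball what actually arrives on the generator side is a plain AF_PACKET
socket.  A minimal sketch follows -- the interface name below is only a
placeholder -- and IPv6 multicast frames stand out by their
33:33:xx:xx:xx:xx destination MAC:

  #!/usr/bin/env python3
  # Rough sanity check (interface name is a placeholder): dump the
  # Ethernet header of a few received frames, so mangled frames or
  # unexpected IPv6 multicast (destination MAC starting with 33:33)
  # are easy to spot.  Needs root; Linux only (AF_PACKET).
  import socket, struct

  def mac(b):
      return ":".join("%02x" % x for x in b)

  ETH_P_ALL = 0x0003
  s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                    socket.ntohs(ETH_P_ALL))
  s.bind(("eth0", 0))   # placeholder: the generator-side interface

  for _ in range(10):
      frame, _addr = s.recvfrom(65535)
      dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
      print("dst=%s src=%s type=0x%04x len=%d"
            % (mac(dst), mac(src), ethertype, len(frame)))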

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


