Re: ethtool isn't showing xdp statistics


Thanks for the clarification.
I used the ethtool_stats.pl script and realized that the total number of
dropped packets is the sum of fdir_miss and rx_missed_errors.
I also observed that sometimes fdir_miss increases to 1-2M while
rx_missed_errors drops by about the same amount, so their total does not change.

Show adapter(s) (enp7s0f0) statistics (ONLY that changed!)
Ethtool(enp7s0f0) stat:       153818 (        153,818) <= fdir_miss /sec
Ethtool(enp7s0f0) stat:      9060176 (      9,060,176) <= rx_bytes /sec
Ethtool(enp7s0f0) stat:    946625059 (    946,625,059) <= rx_bytes_nic /sec
Ethtool(enp7s0f0) stat:     14694930 (     14,694,930) <= rx_missed_errors /sec

As you can see, in my tests I successfully dropped about 15M packets per second.
After that I ran some latency tests and got some bad results.
I loaded an XDP program that drops only UDP packets, and connected two packet
senders through a switch: from one sender I generated a UDP flood (a DDoS-style
attack), and from the other I just sent pings and observed the latency.
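
For reference, a minimal sketch of such a drop-UDP XDP program is shown below.
The actual xdptest.o source is not included in this thread, so the program name,
section name, and the assumption of a libbpf-style build (clang -O2 -target bpf)
are illustrative only; the sketch drops IPv4 UDP and passes everything else,
including the ICMP echo packets used for the latency test.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* Bounds check the Ethernet header; the verifier requires this
	 * before any field access. */
	struct ethhdr *eth = data;
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;

	/* Only handle plain IPv4 (no VLAN tags considered here). */
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	struct iphdr *iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	/* Drop UDP, pass everything else (ICMP/ping included). */
	return iph->protocol == IPPROTO_UDP ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Such an object file can be attached with the same "ip -force link set dev
enp7s0f0 xdp object xdptest.o" command quoted further down in this message.
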
Here are the results.
Latency when there is no attack:

# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=1 ttl=64 time=0.794 ms
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=0.435 ms
64 bytes from 10.0.0.213: icmp_seq=3 ttl=64 time=0.394 ms
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=0.387 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=0.479 ms
64 bytes from 10.0.0.213: icmp_seq=6 ttl=64 time=0.487 ms
64 bytes from 10.0.0.213: icmp_seq=7 ttl=64 time=0.458 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=0.536 ms
64 bytes from 10.0.0.213: icmp_seq=9 ttl=64 time=0.499 ms
64 bytes from 10.0.0.213: icmp_seq=10 ttl=64 time=0.391 ms

--- 10.0.0.213 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9202ms
rtt min/avg/max/mdev = 0.387/0.486/0.794/0.113 ms


Latency during a 150k pps attack:

# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=1 ttl=64 time=43.4 ms
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=8.26 ms
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=47.1 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=2.51 ms
64 bytes from 10.0.0.213: icmp_seq=6 ttl=64 time=1.43 ms
64 bytes from 10.0.0.213: icmp_seq=7 ttl=64 time=40.6 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=44.2 ms
64 bytes from 10.0.0.213: icmp_seq=9 ttl=64 time=38.0 ms
64 bytes from 10.0.0.213: icmp_seq=10 ttl=64 time=50.5 ms

--- 10.0.0.213 ping statistics ---
10 packets transmitted, 9 received, 10% packet loss, time 9060ms

Latency during an 800k pps attack:

# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=4 ttl=64 time=0.395 ms
64 bytes from 10.0.0.213: icmp_seq=5 ttl=64 time=0.359 ms
64 bytes from 10.0.0.213: icmp_seq=8 ttl=64 time=30.3 ms

--- 10.0.0.213 ping statistics ---
10 packets transmitted, 3 received, 70% packet loss, time 9246ms
rtt min/avg/max/mdev = 0.359/10.376/30.376/14.142 ms

Latency during a 1.6M pps attack:

# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
64 bytes from 10.0.0.213: icmp_seq=2 ttl=64 time=34.7 ms

--- 10.0.0.213 ping statistics ---
10 packets transmitted, 1 received, 90% packet loss, time 9205ms
rtt min/avg/max/mdev = 34.756/34.756/34.756/0.000 ms

Latency during a 2.4M pps attack:

# ping -c 10 10.0.0.213
PING 10.0.0.213 (10.0.0.213) 56(84) bytes of data.
From 10.0.0.214 icmp_seq=10 Destination Host Unreachable

--- 10.0.0.213 ping statistics ---
10 packets transmitted, 0 received, +1 errors, 100% packet loss, time 9229ms

After that, as you can see, all pings stop. I don't know how to debug
this latency. I believe I need to do some tuning, but I don't know what.
I tried enabling the BPF JIT, but nothing changed.
If XDP causes this latency, then it is useless for me. Can you help me
understand its cause?

On Mon, Jun 10, 2019 at 1:15 PM Jesper Dangaard Brouer
<brouer@xxxxxxxxxx> wrote:
>
> On Mon, 10 Jun 2019 12:55:07 +0300
> İbrahim Ercan <ibrahim.metu@xxxxxxxxx> wrote:
>
> > Hi.
> > I'm trying to do an XDP performance test in a Red Hat based environment.
> > To do so, I compiled kernel 5.0.13 and iproute 4.6.0.
> > Then I loaded the compiled code onto the interface with the command below:
> > #ip -force link set dev enp7s0f0 xdp object xdptest.o
> >
> > After that, packets were dropped as expected, but I cannot see any
> > statistics with the ethtool command, like below:
> > #ethtool -S enp7s0f0 | grep xdp
> >
> > The ethtool version is 4.8.
> > I did my test with this NIC:
> > Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
> >
> > I wonder why I can't see statistics. Did I miss something while
> > compiling the kernel or iproute? Should I also compile ethtool?
>
> You did nothing wrong. Consistency for statistics with XDP is a known
> issue, see [1].  The behavior varies per driver, which obviously is bad
> from a user perspective.  Your NIC uses the ixgbe driver, which doesn't
> have ethtool stats counters for XDP; instead it actually updates the
> ifconfig counters correctly. For mlx5 it's the opposite (p.s. I use
> this[2] ethtool stats tool).
>
> We want to bring consistency to this area, but there are performance
> concerns.  Any stats counter adds overhead, and XDP is all about
> maximum performance.  Thus, we want this counter overhead to be
> opt-in (that is, not on by default).
>
> Currently you have to add the stats you want to the XDP/BPF program
> itself.  That is the current opt-in mechanism.  To help you code this,
> we have an example here[3].
>
>
> [1] https://github.com/xdp-project/xdp-project/blob/master/xdp-project.org#consistency-for-statistics-with-xdp
> [2] https://github.com/netoptimizer/network-testing/blob/master/bin/ethtool_stats.pl
> [3] https://github.com/xdp-project/xdp-tutorial/blob/master/common/xdp_stats_kern.h
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>
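
For reference, the per-program statistics approach pointed to in [3] above
boils down to counting packets and bytes in a per-CPU map inside the XDP
program and reading that map from user space. The sketch below only
illustrates that pattern; the struct, map, and function names are assumptions,
and the build assumes a recent libbpf toolchain with BTF-style map
definitions, so it is not the exact xdp-tutorial code.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One record per XDP action (XDP_ABORTED .. XDP_REDIRECT). */
struct datarec {
	__u64 rx_packets;
	__u64 rx_bytes;
};

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__type(key, __u32);
	__type(value, struct datarec);
	__uint(max_entries, XDP_REDIRECT + 1);
} xdp_stats_map SEC(".maps");

static __always_inline __u32 xdp_stats_record_action(struct xdp_md *ctx,
						     __u32 action)
{
	if (action > XDP_REDIRECT)
		return XDP_ABORTED;

	struct datarec *rec = bpf_map_lookup_elem(&xdp_stats_map, &action);
	if (!rec)
		return XDP_ABORTED;

	/* Per-CPU map: each CPU updates its own copy, so no atomics needed. */
	rec->rx_packets++;
	rec->rx_bytes += ctx->data_end - ctx->data;

	return action;
}

SEC("xdp")
int xdp_prog_with_stats(struct xdp_md *ctx)
{
	/* A real program would choose XDP_DROP or XDP_PASS per packet
	 * (e.g. the UDP match sketched earlier); the verdict is recorded
	 * either way. */
	__u32 action = XDP_DROP;

	return xdp_stats_record_action(ctx, action);
}

char _license[] SEC("license") = "GPL";

User space can then dump the map (for example with "bpftool map dump name
xdp_stats_map") and sum the per-CPU values to get per-action packet and byte
totals, independent of what the driver exposes via ethtool.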