On Mon, 2015-11-09 at 20:23 +0000, Simon Xiao wrote:
> Thanks, Eric, for providing the data. I am looping in Tom (as I am
> looking into his recent patches) and Olaf (from SUSE).
>
> So, if I understand it correctly, you are running netperf with a single
> TCP connection, and you got ~26 Gbps initially and ~30 Gbps after
> tuning tx-usecs and tx-frames.
>
> Do you have a baseline on your environment for the best/max/peak
> throughput?

The peak on my lab pair is about 34 Gbit/s. I usually get this if I pin
the receiving thread to a CPU; otherwise the process scheduler can hurt
too much.

lpaa23:~# DUMP_TCP_INFO=1 ./netperf -H lpaa24 -l 20 -Cc -T ,1
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lpaa24.prod.google.com () port 0 AF_INET : cpu bind
tcpi_rto 201000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 29200
tcpi_rtt 101 tcpi_rttvar 15 tcpi_snd_ssthresh 289 tpci_snd_cwnd 289
tcpi_reordering 3 tcpi_total_retrans 453
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    20.00     33975.99   1.27     3.36     0.147   0.389

Not too bad; I don't recall ever reaching more than that.

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
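[Editorial note] The tx-usecs/tx-frames tuning mentioned upthread is done with ethtool's coalescing interface, and the receiver pinning with netperf's -T option. A minimal sketch; the device name eth0 and the coalescing values are placeholders, not the settings used in this thread:

```shell
# Show the NIC's current interrupt-coalescing settings first.
ethtool -c eth0

# Raise tx coalescing (example values only -- tune for your hardware).
ethtool -C eth0 tx-usecs 50 tx-frames 88

# Re-run the benchmark. -T takes "local,remote" CPU ids; ",1" pins only
# the remote (receiving) netperf thread to CPU 1, as in the report above.
netperf -H lpaa24 -l 20 -Cc -T ,1
```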
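[Editorial note] The "Service Demand" column is netperf's CPU cost per unit of data moved: utilization times CPU count, divided by throughput in KB/s. As a back-of-the-envelope check, assuming the hosts have 48 CPUs (a guess, but it is what makes the reported numbers line up), the figures above can be reproduced:

```shell
# Reproduce netperf's "Service Demand" (us/KB) from the report above:
#   demand = util%/100 * n_cpus * 1e6 us  /  (throughput in KB/s)
# Assumption (not stated in the mail): the hosts have 48 CPUs.
sd() {  # usage: sd <util_pct> <n_cpus> <throughput_10^6bits/s>
  awk -v u="$1" -v n="$2" -v t="$3" \
    'BEGIN { printf "%.3f\n", (u/100 * n * 1e6) / (t * 1e6 / 8 / 1024) }'
}

sd 1.27 48 33975.99   # -> 0.147 (send/local, matches the table)
sd 3.36 48 33975.99   # -> 0.389 (recv/remote, matches the table)
```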