On Thu, 2019-04-11 at 19:52 +0300, David Woodhouse wrote:
> On Wed, 2019-04-10 at 21:41 +0000, Phillips, Tony wrote:
> > Using the "Fake Server", and doing this from the OpenConnect Client:
> > 
> > # netperf/bin/netperf netperf -H 172.16.0.2 -t UDP_STREAM -- -m 1024
> > MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> > 172.16.0.2 () port 0 AF_INET
> > Socket  Message  Elapsed      Messages
> > Size    Size     Time         Okay Errors   Throughput
> > bytes   bytes    secs            #      #   10^6bits/sec
> > 
> > 212992    1024   10.00     12856801      0    10532.26
> > 212992           10.00       200274            164.06
> 
> Hm, I have broken my test setup — OpenConnect is sending packets faster
> than the kernel at the other end can decrypt them, leading to dropped
> RX packets on the receiving VM's eth0 interface.

Let's switch to using iperf. You can limit the sending bandwidth with
that. If we send more than the receive side can handle, it actually
ends up receiving less than its peak capacity. So...

[fedora@ip-10-0-161-101 src]$ iperf -u -c 172.16.0.2 -l 1400 -b 1700M
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 1400 byte datagrams, IPG target: 6.28 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.1 port 43625 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.08 GBytes  1.78 Gbits/sec
[  3] Sent 1591590 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  2.02 GBytes  1.73 Gbits/sec   0.002 ms 42679/1591590 (2.7%)

[fedora@ip-10-0-161-101 src]$ iperf -u -c 172.16.0.2 -l 1400 -b 1800M
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 1400 byte datagrams, IPG target: 5.93 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.1 port 51111 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.20 GBytes  1.89 Gbits/sec
[  3] Sent 1685213 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.95 GBytes  1.68 Gbits/sec   0.002 ms 186450/1685213 (11%)

[fedora@ip-10-0-161-101 src]$ iperf -u -c 172.16.0.2 -l 1400 -b 2000M
------------------------------------------------------------
Client connecting to 172.16.0.2, UDP port 5001
Sending 1400 byte datagrams, IPG target: 5.34 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.1 port 48338 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.44 GBytes  2.10 Gbits/sec
[  3] Sent 1872458 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.59 GBytes  1.36 Gbits/sec   0.001 ms 653669/1872458 (35%)

Once I hit the limit, the faster I send, the less actually gets
through. Try that with limits growing from 130Mb/s or whatever your
last reported result was, and see what the *actual* limit is for your
test setup, and for your VPN.

> Can you look at the 'ifconfig' stats on the sending box, before and
> after running the UDP_STREAM test? Is it actually sending more packets
> than netperf admits to receiving?
> 
> I have also made a microbenchmark for the ESP encryption itself, in my
> perfhacks branch. For me, GnuTLS is getting about 1785Mb/s, which is in
> line with what I was seeing for actual data transport.

My hacks2 branch now gives me 2655Mb/s :)
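
To spell out the sweep I'm suggesting above: something like the loop
below should bracket your real ceiling fairly quickly. This is only a
sketch; it assumes iperf 2 syntax on the client and an 'iperf -s -u'
already listening on 172.16.0.2, so adjust the rates and address to
suit your setup.

#!/bin/sh
# Sweep the offered UDP load and watch where the server report stops
# keeping up with the client's send rate. Assumes iperf 2 and a server
# already running 'iperf -s -u' on 172.16.0.2.
for bw in 130M 200M 400M 800M 1200M 1600M 2000M; do
    echo "=== offered load: $bw ==="
    iperf -u -c 172.16.0.2 -l 1400 -b "$bw" -t 10
done

The number that matters is the goodput on the 'Server Report' line;
once the loss percentage starts climbing, whatever that line shows is
your actual limit.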
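
On the ifconfig question: rather than eyeballing ifconfig before and
after, it may be easier to diff the kernel's interface counters
directly. A rough sketch, run on the sending box, assuming the usual
sysfs statistics files are present; pick whichever interface you want
to check (tun0 for the VPN leg, eth0 for the physical one):

iface=tun0   # or eth0; whichever leg you want to check
before=$(cat /sys/class/net/$iface/statistics/tx_packets)
netperf -H 172.16.0.2 -t UDP_STREAM -- -m 1024
after=$(cat /sys/class/net/$iface/statistics/tx_packets)
# Compare this with the 'Messages Okay' count netperf reports for the
# receiving side; the difference is what got dropped along the way.
echo "packets sent on $iface: $((after - before))"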
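
And as a separate sanity check on the raw crypto speed (this isn't the
perfhacks microbenchmark, just the library's own numbers): if your
gnutls-cli build has the cipher benchmark option, it will tell you what
GnuTLS itself can do on your box.

# Raw cipher/MAC throughput as measured by GnuTLS on this machine.
gnutls-cli --benchmark-ciphers

If those numbers are well above what the VPN actually moves, the
bottleneck is in the data path rather than in the crypto itself.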