On Wed, 2019-04-10 at 21:41 +0000, Phillips, Tony wrote:
> Using the "Fake Server", and doing this from the OpenConnect Client:
>
> # netperf/bin/netperf -H 172.16.0.2 -t UDP_STREAM -- -m 1024
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 172.16.0.2 () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992    1024   10.00     12856801      0    10532.26
> 212992           10.00       200274            164.06

Hm, I have broken my test setup: OpenConnect is sending packets faster
than the kernel at the other end can decrypt them, leading to dropped
RX packets on the receiving VM's eth0 interface.

Can you look at the 'ifconfig' stats on the sending box, before and
after running the UDP_STREAM test? Is it actually sending more packets
than netperf admits to receiving? (There's a sketch below of what I
mean.)

I have also made a microbenchmark for the ESP encryption itself, in my
perfhacks branch. For me, GnuTLS manages about 1785Mb/s, which is in
line with what I was seeing for actual data transport. The OpenSSL
build does better in the microbenchmark, at 1899Mb/s. Then I threw in
a stitched AES-CBC + SHA1 implementation, which took it up to
2346Mb/s... and some further changes bring it to 2610Mb/s.

I'm going to split the stitched implementation out into a separate
aesni-esp.c alongside the GnuTLS and OpenSSL ones. In the meantime,
can you try tests/esptest with both the GnuTLS and OpenSSL builds, and
also see how fast each one really *sends* data, judging by the eth0
interface statistics?
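
To be concrete about the ifconfig check, something like this is what I
have in mind. It's a sketch assuming Linux with eth0 as the relevant
interface on each box; the /sys/class/net counters are the same numbers
ifconfig shows, but easier to script:

  # On the sending box: snapshot the packet counter, run the test,
  # then snapshot again to see how many packets actually went out.
  IF=eth0
  TX0=$(cat /sys/class/net/$IF/statistics/tx_packets)
  netperf/bin/netperf -H 172.16.0.2 -t UDP_STREAM -- -m 1024
  TX1=$(cat /sys/class/net/$IF/statistics/tx_packets)
  echo "$IF tx_packets during test: $((TX1 - TX0))"

Do the same on the receiving VM with rx_packets and rx_dropped, so we
can see how many packets the kernel threw away before netperf ever saw
them.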
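
And for the esptest comparison, roughly this. I don't know how you
have the two builds laid out, so the tree names here are made up, and
I'm assuming esptest runs with no arguments; adjust to match your
setup:

  # Hypothetical build-tree names; one build per crypto backend.
  for build in openconnect-gnutls openconnect-openssl; do
      echo "== $build =="
      "$build"/tests/esptest
  done

  # Real sending rate, from the eth0 byte counters while a transfer
  # is running: delta of tx_bytes over 10 seconds, converted to Mb/s.
  IF=eth0
  B0=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  sleep 10
  B1=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  echo "$IF tx rate: $(( (B1 - B0) * 8 / 10 / 1000000 )) Mb/s"

The wire-level number is the one that matters: it tells us what each
build achieves for really sending data, independent of what netperf
claims at the far end.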