On Wed, 2019-03-27 at 14:47 +0000, Phillips, Tony wrote:
> Note: I noticed y'all dropped the "list" off the email -- I'm also
> omitting it assuming that was intentional.  If you want this stuff
> to be in the email archive, I'll let y'all Cc: it back in.

Apologies, that was me when I was sending a crappy HTML top-post from
my phone. We should probably add it back.

> David / Daniel, how's this:
>
> One thing I think is weird is that the "throughput" on the VPN test
> cases is in excess of 10GbE's capabilities, and these machines
> actually have 1Gb NICs, not 10Gb.

Now that's just silly. There is no compression in our ESP support,
unlike with Cisco+DTLS. I don't understand how this can be possible
at all.

Did you say you had multiple 1Gb NICs, not just one? Do these results
become any less confusing if you use only one?

Can you try using netperf with the -F argument to send bytes from
/dev/urandom, which might be less compressible (by this compression
that doesn't exist and can't be happening, but try it anyway :)

Or are we misreading the results, and those were packets *sent*, not
packets successfully received by the other end?
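Concretely, that would be something like this (netperf's global -F
option pre-fills the send buffers from the named file instead of the
default, highly compressible, contents):

  netperf -H NetPerfServerHost -F /dev/urandom -t UDP_STREAM -- -m 1024 -R 1

On that last point: netperf's UDP_STREAM output prints two result
lines per run -- the first is the local send side, the second is the
remote receive side -- so the first line's "Throughput" is only the
rate at which sends were submitted locally, not what actually arrived.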
> If you need us to re-run this with particular options, let me know.
>
>
> ----------------------------------------------------------------------------------------------------------------
> VPN without recent patches
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 1024 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992    1024   10.00    11789372      0     9657.81
> 124928           10.00      203530             166.73
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 512 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992     512   10.00    12001625      0     4915.84
> 124928           10.00      345915             141.69
>
> netperf -H NetPerfServerHost
>
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.13       14.03
>
> ----------------------------------------------------------------------------------------------------------------
> VPN with patches
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 1024 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992    1024   10.00    12770666      0    10461.70
> 124928           10.00      188294             154.25
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 512 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992     512   10.00    13451809      0     5509.84
> 124928           10.00      335069             137.24
>
> netperf -H NetPerfServerHost
>
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.07      183.16
>
> ----------------------------------------------------------------------------------------------------------------
> No VPN at all
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 1024 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 124928    1024   10.00     1146407      0      939.13
> 124928           10.00     1138218             932.42
>
> netperf -H NetPerfServerHost -t UDP_STREAM -- -m 512 -R 1
>
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 124928     512   10.00     2161509      0      885.35
> 124928           10.00     2140298             876.66
>
> netperf -H NetPerfServerHost
>
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to NetPerfServerHost () port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.03      930.90
>
>
> -----Original Message-----
> From: David Woodhouse [mailto:dwmw2@xxxxxxxxxxxxx]
> Sent: Tuesday, March 26, 2019 3:11 PM
> To: Daniel Lenski; Phillips, Tony
> Cc: Nikos Mavrogiannopoulos
> Subject: Re: [EXTERNAL] Re: What throughput is reasonable?
>
> On Tue, 2019-03-26 at 18:13 +0200, Daniel Lenski wrote:
> > Awesome that David's patch is working!
> > (
> > https://gitlab.com/openconnect/openconnect/commit/3014e3059d5732ca5b0954406a0e6fa74ec23148?w=1
> > )
> >
> > But… why? Do you understand the underlying reason why a UDP socket is
> > returning EAGAIN errors?
> >
> > Is it because the kernel buffers are filling up, as described here?
> > https://stackoverflow.com/a/20198054/20789
>
> That ought to be the only reason it should happen, surely?
>
> I don't know if we're *consistently* keeping the buffers full, or if
> we're occasionally allowing them to drain. I know our packet handling
> does have the possibility of being bursty, as it first reads all it
> can, then writes all it can, instead of having a work limit for each
> round. That's never been a problem in practice before, though. Only
> completely theoretical from a design standpoint.
>
> I'd like to see those UDP netperf results, both over the VPN and the
> underlying network.
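To make the kernel-buffer explanation from that mail concrete, here is
a minimal standalone sketch -- NOT OpenConnect's actual code, and the
destination address and sizes are made up -- of the failure mode and
the usual remedy: a non-blocking UDP socket starts failing send() with
EAGAIN/EWOULDBLOCK once its SO_SNDBUF send buffer is full, and the
caller is expected to wait for POLLOUT before retrying:

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

        /* Made-up destination (TEST-NET-1, discard port) just for the
         * sketch; in real life this would be the ESP/DTLS peer. */
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(9) };
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
        connect(fd, (struct sockaddr *)&dst, sizeof(dst));

        char pkt[1024];
        memset(pkt, 0xAA, sizeof(pkt));

        for (long i = 0; i < 1000000; i++) {
                if (send(fd, pkt, sizeof(pkt), 0) < 0) {
                        if (errno == EAGAIN || errno == EWOULDBLOCK) {
                                /* Send buffer full: block until the
                                 * socket is writable, then retry. */
                                struct pollfd pfd = { .fd = fd,
                                                      .events = POLLOUT };
                                poll(&pfd, 1, -1);
                                i--;
                                continue;
                        }
                        perror("send");
                        break;
                }
        }
        close(fd);
        return 0;
}

Whether EAGAIN is ever actually hit depends on how quickly the kernel
can drain the send buffer towards the destination -- which is exactly
why a slow underlying path can surface it when a fast one never does.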
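The burstiness point is easier to see in code, too. A hypothetical
sketch of the two scheduling strategies -- again, not the real
mainloop; read_one_packet() and write_one_packet() are stand-in
helpers that return nonzero while more work is pending:

/* Stand-in helpers for the real tun/ESP I/O; each returns nonzero
 * while there is more work to do on its side. */
static int read_one_packet(void)  { return 0; }
static int write_one_packet(void) { return 0; }

#define WORK_LIMIT 16 /* arbitrary per-round budget for the sketch */

/* Bursty: drain all input, then push all output. The write side can
 * hit the UDP socket with one large burst and fill SO_SNDBUF. */
static void round_bursty(void)
{
        while (read_one_packet())
                ;
        while (write_one_packet())
                ;
}

/* Work-limited: interleave bounded slices, so output is produced in
 * smaller bursts and the buffers get a chance to drain in between. */
static void round_limited(void)
{
        int budget;

        for (budget = WORK_LIMIT; budget && read_one_packet(); budget--)
                ;
        for (budget = WORK_LIMIT; budget && write_one_packet(); budget--)
                ;
}

int main(void)
{
        round_bursty();
        round_limited();
        return 0;
}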