On Mon, 2019-03-25 at 19:50 +0000, Phillips, Tony wrote:
> Okay. Now we're on to something.
>
> After applying that patch (and the minor edit) It's now noticeably
> faster. I average now 18-20 MBytes/sec (as opposed to 1.9 MB/sec)

It took me a while to work out why that actually helps, since AFAICT
all I'd done is push the drop behaviour one step back up the stack.
Before, we dropped packets in OpenConnect when sending them returned
-EAGAIN. Now, the tun device drops them instead.

This is better because they're properly accounted as dropped packets,
and because the loss is Not Our Fault™. It also means that, at least
in theory, those packets ought to still be associated with the sending
socket, which ought to *know* they haven't been sent. But the tun
device doesn't seem to *do* that, so they're just getting dropped
anyway... except I suspect the real difference is that we were wasting
cycles encrypting packets before we dropped them, and now we drop them
before the encryption ever happens.

Either way, my test setup really does seem to be getting the same
performance as the raw crypto test, which is basically all I can
aspire to unless I start doing threading. Even the copy to/from
userspace seems to have disappeared into the noise.

I'd love to work out what's different in your setup. Are we sure your
GnuTLS is really using AES-NI? Can we compare with what the PA client
does? As Dan asked, can you run OpenConnect and the PA client
back-to-back on precisely the same setup? I'd like to see whether the
PA client's tun device shows packet drops, and how much CPU the client
uses while it handles the traffic.
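For the archives, the shape of the change is roughly the following.
This is only a minimal sketch in plain C, not the actual OpenConnect
code: the fd names and the encrypt_and_send() stand-in are invented
for illustration, and the real mainloop multiplexes both directions
with non-blocking I/O. The point is purely the ordering: don't read a
packet out of the tun device until the UDP socket can take it.

#include <errno.h>
#include <poll.h>
#include <unistd.h>

#define PKT_MAX 2048

/* Stand-in for the real DTLS encrypt+transmit path. */
extern void encrypt_and_send(int udp_fd, const unsigned char *pkt,
			     size_t len);

static void tun_to_net_loop(int tun_fd, int udp_fd)
{
	unsigned char pkt[PKT_MAX];

	for (;;) {
		struct pollfd pfd = { .fd = udp_fd, .events = POLLOUT };

		/* Wait until the socket has room for another datagram... */
		if (poll(&pfd, 1, -1) < 0) {
			if (errno == EINTR)
				continue;
			break;
		}

		/* ...and only then take a packet from tun. Until we do,
		 * outgoing packets queue up in the tun device, which
		 * drops them (with proper accounting) if we stay slow. */
		ssize_t len = read(tun_fd, pkt, sizeof(pkt));
		if (len > 0)
			encrypt_and_send(udp_fd, pkt, (size_t)len);
	}
}

The win under overload is that the expensive crypto never runs for
packets that were doomed anyway.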
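And for the back-to-back test, one cheap way to watch the tun drop
counter is the sketch below. It assumes Linux sysfs and a device name
of "tun0" (adjust for whatever device the PA client creates); 'ip -s
link show dev tun0' shows the same numbers.

#include <stdio.h>

int main(void)
{
	/* tx_dropped counts packets the kernel handed to the tun
	 * device that the client failed to read in time. */
	const char *path = "/sys/class/net/tun0/statistics/tx_dropped";
	unsigned long long dropped;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu", &dropped) != 1) {
		fprintf(stderr, "failed to parse %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("tx_dropped on tun0: %llu\n", dropped);
	return 0;
}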