Re: Pulse Connect Secure support

On Mon, 2019-06-10 at 18:53 +0100, David Woodhouse wrote:
> 
> I'd be grateful for any testing, especially with IPv6. If you're
> currently using the Juniper nc support, let me know if --protocol=pulse 
> works and if not, please send debug output.


So we have IPv6 working now; the fields for IPv6 DNS servers and split
routing are all implemented, and the trivial bugs in the IPv6 address
handling have been fixed.

However, I have encountered something strange which I think is a server
issue, and would appreciate confirmation that people see it with the
official Pulse clients too.

The problem is that the fast ESP (UDP) transport only works for
carrying the protocol that you've connected *over*. The servers support
IPv6-over-ESP-over-IPv6 and Legacy IP-over-ESP-over-Legacy IP, but not
the mixed combinations like IPv6-over-ESP-over-Legacy IP.

So if you've connected to the VPN server over IPv6, you can only get
fast IPv6; the Legacy IP packets have to go through the fallback TCP
connection. And conversely, if you've connected to the VPN over Legacy
IP, your IPv6 packets have to take the slow route over TCP.
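
To summarise the combinations I'm seeing:

  Connected to server over   Tunnelled packet   Transport used
  Legacy IP                  Legacy IP          ESP (UDP port 4500)
  Legacy IP                  IPv6               TCP fallback
  IPv6                       IPv6               ESP (UDP port 4500)
  IPv6                       Legacy IP          TCP fallback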

Testing against a dual-stack server, the nasty hack at
http://git.infradead.org/users/dwmw2/openconnect.git/commitdiff/49a5d865cd42ac5
appears to work around this correctly, but I'm kind of hoping I've
missed something and it's not actually necessary.
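
For reference, this is roughly what any workaround has to do. The sketch
below is *not* the code from that commit (the names are made up): only
send a packet over ESP when its IP version matches the address family we
connected to the gateway with, and queue everything else on the TCP
tunnel.

#include <stdint.h>
#include <sys/socket.h>   /* AF_INET, AF_INET6 */

enum transport { TRANSPORT_ESP, TRANSPORT_TCP };

/* gateway_af is the address family of the HTTPS connection to the
 * gateway (AF_INET or AF_INET6); pkt is the outgoing tunnelled packet. */
static enum transport pick_transport(int gateway_af, const uint8_t *pkt,
                                     int len)
{
    int pkt_af;

    if (len < 1)
        return TRANSPORT_TCP;

    /* The IP version is the top nibble of the first byte. */
    pkt_af = ((pkt[0] >> 4) == 6) ? AF_INET6 : AF_INET;

    /* The server only accepts ESP packets of the family we connected
     * over; everything else has to fall back to the TCP tunnel. */
    return (pkt_af == gateway_af) ? TRANSPORT_ESP : TRANSPORT_TCP;
}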

This is what I've tested:

I've connected to that same server with the nc protocol, and it doesn't
even *offer* me ESP parameters when I connect to it over IPv6. That
makes sense: nc doesn't support IPv6, and if the ESP tunnel could only
carry IPv6 there would be no point in configuring it. If I connect over
Legacy IP, ESP works as usual (for Legacy IP only, of course).

I've watched a Windows client connecting over Legacy IP, and deciding
to send only Legacy IP packets in ESP while it sends IPv6 within the
TCP connection. 

You should be able to reproduce this fairly easily with the official
client. Just connect to your server over Legacy IP, and pass Legacy IP
traffic through the VPN... you'll see UDP traffic over the public
network to port 4500. Then pass IPv6 traffic through the VPN, and
you'll see it using TCP instead. Connect to your server over IPv6
instead of Legacy IP, and it'll be the opposite.


It's weird, because the ESP frame *has* a Next-Header field which
specifies what's encapsulated within, and they even *check* it (if you
send an ESP frame containing IPv6 without setting the Next-Header field
correctly to 0x29, it doesn't get through even when you're connected
over IPv6). So I can't imagine why it needs to be tied to the protocol
it arrived on, unless it's something to do with the acceleration, where
a packet received over one protocol can't then be injected into the
other stack after decryption?
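
For anyone experimenting with this: the Next-Header byte lives at the
end of the ESP trailer (RFC 4303), and the values are the standard IP
protocol numbers, 0x04 (IPPROTO_IPIP) for an encapsulated Legacy IP
packet and 0x29 (IPPROTO_IPV6) for IPv6. A minimal sketch (not
OpenConnect's actual ESP code) of deriving it from the tunnelled packet:

#include <stdint.h>
#include <netinet/in.h>   /* IPPROTO_IPIP (0x04), IPPROTO_IPV6 (0x29) */

/* Map the tunnelled packet's IP version (top nibble of the first byte)
 * to the value that goes in the ESP trailer's Next Header field. */
static int esp_next_header(const uint8_t *pkt, int len)
{
    if (len < 1)
        return -1;

    switch (pkt[0] >> 4) {
    case 4:
        return IPPROTO_IPIP;   /* Legacy IP inside ESP */
    case 6:
        return IPPROTO_IPV6;   /* IPv6 inside ESP */
    default:
        return -1;             /* not an IP packet */
    }
}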


If you can reproduce, you might want to file a support ticket. It
shouldn't be falling back to slow TCP.

If you can't reproduce, please show me. I'd love to know what's
different. Perhaps the server is working around a historical(?) client
bug and I just need to do something to indicate that my client
*doesn't* have that bug. Or perhaps it is that hardware acceleration
limitation, and some servers will *tell* us that they have it fixed?


_______________________________________________
openconnect-devel mailing list
openconnect-devel@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/openconnect-devel
