Re: tcp_outgoing_address and HTTPS

On 21/03/18 08:12, Michael Pro wrote:
> Totally agree with you, and at the same time - do not agree. But,
> consider the following situation. There is https://site.net/ where
> there are 1.jpg and 2.jpg. Suppose I download 1.jpg from this site from
> the address 1.1.1.1 and 2.jpg from the address 2.2.2.2.

There is no such concept as "site" in TLS. It is a point-to-point protocol.
The client opened a single connection and sent two requests to what it
perceives to be a *single* server. If the interception proxy were not
there the content would have been served by that same server anyway.
Nothing is lost by having the proxy mimic the exact behaviour of the
real client.
Also, the responses have to be served to the client sequentially, so
there is little gain from fetching them any way but sequentially. And
going to all the trouble of multiple TCP + TLS handshakes adds CPU + RAM
+ socket + time costs to the transaction. So it is a net negative to do
as proposed.


> Even more. There
> are situations when you need to release a certain connection to the
> Internet through a single provider (for example, mobile), but you need
> to download the largest file that is never physically downloaded by
> this connection. On another, no way. To Squid these are theoretically
> several separate computers (one per incoming connection), so what
> prevents us, acting as different computers, from using different
> outgoing interfaces even for the same origin address?

The machine-specific TLS crypto keys. With RSA it was possible to copy
these keys between machines (though considered very bad practice). With
DH and ECDH, new secret keys are generated for every individual TLS
handshake. They cannot be shared. Once those keys start being used, the
data inside (particularly the signed items) is locked to them.

To stop an HTTPS transfer mid-delivery requires the proxy to abort both
the client and server TLS (and TCP) connections, which is what the
pinning does.

> 
> I'm not saying that you need to push the unbroken.

That sentence does not compute for me.

> Look at the problem
> from the other side.
> 
> For example, in Chrome, I set up a proxy 1.1.1.1 and download
> https://site.net/1.jpg. At the same time in Mozilla I set up a proxy
> 2.2.2.2 and download https://site.net/2.jpg. What is the difference if
> one Squid is set up to do the same?

The differences are:
1) Squid is not a browser.
2) Squid is not the TLS "end-client".
3) Squid is not the TLS origin server.
4) different TLS sessions
5) different client TLS security keys
6) different server TLS security keys

Overall there is a 3-way TLS "origin":

  Chrome + 1.1.1.1:port + the specific IP:443 address of "site.net" that
Chrome chose to connect to.

  Mozilla + 2.2.2.2:port + the specific IP:443 address of "site.net"
that Mozilla chose to connect to.


> acl 1s-jpg url_regex ...1.jpg
> acl 2s-jpg url_regex ... 2.jpg
> tcp_outgoing_address 1.1.1.1 1s-jpg
> tcp_outgoing_address 2.2.2.2 2s-jpg
> 
> Where is the entrance here?

Please explain what you mean by this word "entrance"?

1.1.1.1 does not hold the dynamically created TLS state inside 2.2.2.2.

2.2.2.2 does not hold the dynamically created TLS state inside 1.1.1.1.

Squid does not hold the TLS state inside the client.

What Squid can do is deliver content from its cache (if caching is
permitted by the origin which generated it), or deliver the encrypted
traffic to the origin server whose TLS handshake the client accepted.


You are perhaps still thinking in traditional caching terms where Squid
is a client independent of the Browser and where TCP connections can be
freely disconnected and rewritten per HTTP request. In plain-text HTTP
that would be true.
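
For a plain http:// site, for example, the kind of configuration you
quoted does exactly that; a completed sketch (the regex values here are
purely illustrative):

  # Plain-text HTTP only: each request can use a different outgoing IP.
  acl 1s-jpg url_regex /1\.jpg$
  acl 2s-jpg url_regex /2\.jpg$
  tcp_outgoing_address 1.1.1.1 1s-jpg
  tcp_outgoing_address 2.2.2.2 2s-jpg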

When intercepting TLS / HTTPS that is false. The true end-client /
Browser maintains client-to-origin state in its TLS properties,
*precisely* and intentionally for the purpose of preventing exactly this
type of traffic rewriting by a proxy.


IF, and *only* if, the client is using "TLS explicit" (as defined by the
TLS specification) to the proxy and sending regular HTTP requests over
that secured connection, can the proxy do its own origin-server choosing
freely. Some people have been calling that type of setup "HTTPS", but
really the existence of proxy choice makes it a lot different from
traditional HTTPS on port 443.
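
As a rough illustration, such a listener would be configured with
something like the following (paths and port are examples only; the
tls-* option names are the Squid-4 spelling, older releases use cert=
and key=):

  # Browsers connect to this port with TLS and send their proxy requests
  # inside that encrypted connection ("TLS explicit" / secure proxy).
  https_port 3128 tls-cert=/etc/squid/proxy-cert.pem tls-key=/etc/squid/proxy-key.pem

The browser side then needs an "HTTPS" proxy setting (for example a PAC
file returning "HTTPS proxy.example.net:3128") rather than interception.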


> 
> Why do we try to shove it into one hole, if we can divide it into
> separate processes. It may even need some new option or ACL to say
> that, for these connections, new tunnels (TLS, ssl, certs, ...) should
> always be created
> acl separate_this CreateNewTunnelForNewLink

When the ability to generate CONNECTs is added, that should not be
necessary. Whether the cache_peer can handle the CONNECT attempt will
automatically determine whether a tunnel is possible as an alternative
to a DIRECT/PINNED connection.
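
For reference, the peer involved would just be a normal cache_peer
entry, e.g. (the host name is purely an example):

  # Parent proxy that such CONNECT tunnels could be relayed through.
  cache_peer upstream.example.net parent 3128 0 no-query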

Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



