Re: Splicing a connection if server cert cannot be verified

Hi Amos,

> > Yes, but Squid has no way of trusting a self-signed cert. When Squid
> > mints a server cert on the fly and sends it to the client, the client
> > won't have any idea that the cert was originally self-signed. Like the
> > previous scenario, I'd want to step out of the way and defer the
> > decision to the client.
> >
> 
> The global list of CAs which non-self-signed certs validate against is explicitly
> loaded into the SSL library. It is not a built-in list.
> 
> All you have to do to trust a "self-signed" cert is add the CA signing it to your
> trusted set.

I don't think you quite understand what I'm saying here. I can't run around and add new self-signed certs to Squid every time users discover broken sites; even worse, for SSLv3 connections, there's nothing "to add" that will fix future connections apart from permanently exempting the target site from bumping.

I want to bump connections ONLY when I can do so reliably, i.e. when I trust the server's cert and the connection is not SSLv3. In all other cases, I want to bail out and let the client decide what to do. The self-signed cert might be expected and trusted by the client depending on the target site. This way, my proxy isn't degrading connectivity for users and I get extra visibility into most connections by applying bumping opportunistically.
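For concreteness, here is roughly the direction I have in mind expressed as squid.conf, assuming the Squid 3.5 peek-and-splice directives (the file path is a placeholder, and this deliberately leaves out the part I'm asking about, i.e. falling back to splice when the server cert fails validation):

  # Learn the SNI from the client hello without committing to anything yet.
  acl step1 at_step SslBump1
  acl step2 at_step SslBump2

  # Sites already known to be unbumpable; hand them straight through.
  acl no_bump ssl::server_name "/etc/squid/no-bump-sites.txt"

  ssl_bump peek step1
  ssl_bump splice no_bump
  ssl_bump stare step2
  ssl_bump bump all

  # Refuse SSLv3 on the outgoing (server) side of bumped connections.
  sslproxy_options NO_SSLv3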
 
> >> AIUI, the basic problem that "precludes bumping" is that in order to
> >> peek at the serverHello some clientHello has to already have been
> >> sent. Squid is already locked into using the features advertised in
> >> that clientHello or dying - with no middle ground.
> >> Most times the client and Squid do not actually have identical
> >> capabilities so peeking the serverHello then either bump or splice
> >> actions will break depending on which clientHello Squid sent.
> >
> > I don't see why that is a problem if you just recreate the connection
> > to the server. That is, you first try bumping the connection by
> > sending a new clientHello to the server, and if the server cert cannot
> > be verified, a new connection is established and the original
> > clientHello is sent to the server.
> >
> 
> "just" recreating the connection to the server means discarding the old one.
> Which is not anywhere near as nice a proposition once you look beyond the
> single proxy.
> 
> The details, you can skip if you want to avoid...
> 
> * Each aborted connection means 15 minutes TCP TIME_WAIT delay before
> that TCP socket can be re-used.
> 
> * TCP/IP limits software to 63K sockets per IP address (64K total with
> 1024 reserved).
> 
> Using multiple outbound connections to discover some behaviour is what the
> browser "happy eyeballs" algorithm is all about. They are just looking for
> connection handshake speed rather than cert properties.
> 
> - Browsers are operating at a rate of tens to hundreds of single requests per
> minute. With all 64K per-IP sockets on that machine dedicated to the one
> end user.
> 
> - Proxies are individually dealing with requests at a rate of tens or hundreds
> of thousands of requests per minute. Sharing sockets between hundreds or
> thousands of end-users.
> 
> At that speed 64K sockets per IP address are consumed very quickly already.
> Squid is limited to a very few over 10K new connections/minute per IP
> address on the machine. We get away with higher rates by having HITs,
> collapsed-forwarding and by multiplexing requests onto server connections.
>  => multiplex is the biggest gainer which is not possible with HTTPS traffic,
> despite bumping.

Like you just mentioned, Squid's clever socket sharing algorithms cannot be used for HTTPS traffic, so I don't quite understand the argument you're making here. If Squid has to terminate a connection because it doesn't trust the target server due to a bad cert, that connection will be subject to the TCP TIME_WAIT delay anyway. Squid might as well note the fact that the site is broken in a cache somewhere, so that the next time a connection is established to that same host, Squid won't try to bump the connection.
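To make that concrete: the "cache" of broken sites doesn't even have to live inside Squid. A rough sketch, reusing the no-bump list idea from above (the file name and the log-watching job are hypothetical, not something Squid provides):

  # squid.conf: splice anything we previously failed to bump, before
  # the later rules commit us to bumping.
  acl no_bump ssl::server_name "/etc/squid/no-bump-sites.txt"
  ssl_bump splice no_bump

An external job would append the hostname of each failed bump to that file and run "squid -k reconfigure", so the next CONNECT to the same host gets spliced instead of bumped. Crude, but it means only the first connection to a broken site pays the extra-connection/TIME_WAIT cost you describe.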

> It is a little bit safer to allow Squid to use SSLv3 to servers which are still
> broken, than to allow clients to use SSLv3 to Squid. At least half of the
> connectivity you have something to do with becomes trustworthy even
> though the overall end-2-end security is no better (yet).

Perhaps, but then again, not really. :) It gives the user a false sense of security when it looks like they are connected to a server with TLS 1.2 when half of the connection is in fact done over SSLv3.

Let me try to reset the conversation here a bit: 

Given my goal of bumping connections opportunistically while not degrading connectivity for the proxy's users in the cases where I can't bump successfully, what would you do to achieve this?

Thanks,
Soren

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
