Re: SslBump Peek and Splice using Squid-4.1-5 in Amazon1 Linux with Squid Helpers

On 12/12/18 6:53 am, Enrico Heine wrote:
> Dear Mike,
> 
> Please checkout the following and let us know if you need further help.
> 
> http://www.squid-cache.org/Doc/config/sslproxy_cert_error/
> 

Before you use it though, please consider what the words "Certificate
does not match domainname" actually *mean*.

This Squid is configured to deliver a single specific custom-built
certificate to all clients who contact the proxy. Yet the proxy is being
used to receive TLS traffic for any domain and the admin is passing test
traffic for multiple different domains and raw-IP addresses.



> Best regards,
> 
> Flashdown
> 
> Am 11. Dezember 2018 16:41:56 MEZ schrieb Mike Quentel:
> 
>     Hi, I have been unsuccessfully trying to get Squid-4.1-5 in AWS
>     (Amazon 1 Linux) to allow transparent proxy of certain domains, as
>     well as IPs associated with those domains, whilst rejecting everything
>     else.
> 
>     I have been referencing documentation at
>     https://wiki.squid-cache.org/Features/SslPeekAndSplice
> 
>     Version of Squid: 4.1-5 for Amazon 1 Linux available at
>     http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/ (many thanks to
>     @elico for these packages) specifically, the following:
> 
>     1) http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-4.1-5.amzn1.x86_64.rpm
>     2) http://faster.ngtech.co.il/repo/amzn/1/beta/x86_64/squid-helpers-4.1-5.amzn1.x86_64.rpm
> 
>     Example of tests that I am running:
> 
>     1) curl -kv https://service.us2.sumologic.com (EXPECTED: successfully
>     accessed; OBSERVED: successfully accessed)

The TLS SNI contains "service.us2.sumologic.com", and
 - the server produced an X.509 certificate for that domain, and
 - your server_name ACL matches it as a sub-domain of ".sumologic.com"


Note that the -k parameter for curl only disables security on the
curl<->Squid TLS connection. It has nothing to do with the
Squid<->origin connections.

You should really be testing with "curl --cacert /etc/squid/squid.pem",
or with connections that omit -k entirely, to see what actually happens
for clients when their traffic goes through your system.
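
For example, re-running your first test without -k (a sketch; the .pem
path assumes the same file you configured on https_port):

  # trust the proxy's own signing certificate explicitly
  curl -v --cacert /etc/squid/squid.pem https://service.us2.sumologic.com

  # or behave like a normal client that only trusts the system CA bundle
  curl -v https://service.us2.sumologic.com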


>     2) curl -kv https://54.149.155.70 (EXPECTED: successfully accessed
>     because it resolves to service.us2.sumologic.com; OBSERVED:
>     "Certificate does not match domainname"  [No Error] (TLS code:
>     SQUID_X509_V_ERR_DOMAIN_MISMATCH))

IMO the expectation is what is wrong here.

The TLS SNI does not exist, and
 - being intercepted traffic the CONNECT authority is
"54.149.155.70:443", and
 - the server produced an X.509 certificate with SubjectName of either
"54.149.155.70" or something else not matching your server_name ACL entries.

FYI: server_name is a text-string matching ACL. I expect you will find
there is no reverse-DNS lookup being performed during the ssl_bump
testing, only later, after the decision to contact the server has
already been made. You can confirm that with the debug log your test
produced. Look for the lines saying what each ACL is checking for and
against.
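
For example, something along these lines will show those checks (a
sketch; the cache.log path is an assumption based on a typical install):

  grep -n "allowed_https_sites" /var/log/squid/cache.log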


>     3) curl -kv https://www.google.com (EXPECTED: failed to access;
>     OBSERVED: failed to access)

"failed to access" is a gross over-simplification. This transaction is
both allowed and not-allowed at the same time.

If you look into the log I expect you will see this sequence happening:

 * the http_access rules *allow* the CONNECT tunnel, then

 * the ssl_bump rules select the "bump" action at Step-2 (i.e. using
only the TLS clientHello details), then

 * curl -k ignores the small problem that you are not presenting the
X.509 keys belonging to Google, then

 * the decrypted GET request inside the tunnel gets rejected because:
  - the "allowed_https_sites" ACL has no X.509 server details to test
against, so does not match,
  - the "allowed_http_sites" ACL does not match either,
  - the "http_access deny all" matches everything reaching it.


>     4) curl -kv https://172.217.13.164 (EXPECTED: failed to access;
>     OBSERVED: "Certificate does not match domainname"  [No Error] (TLS
>     code: SQUID_X509_V_ERR_DOMAIN_MISMATCH))

Same thing going on as for test (2).


> 
>     Below is the latest version of the squid.conf being used. Apologies
>     for any obvious errors--new to Squid here. I have been grappling with
>     this for weeks, with many iterations of squid.conf so any advice is
>     greatly appreciated; many thanks in advance.
>     ------------------------------------------------------------------------
>     visible_hostname squid

You have connected this proxy to the Internet. The above is required by
Internet RFCs to be an FQDN (fully qualified domain name).

Even if you do not want to follow that requirement, it MUST be a unique
name. If any of your HTTP traffic ever goes through another proxy
sharing this *very common* config mistake you will encounter forwarding
loop errors.
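
For example (a sketch only; the name itself is a placeholder, pick one
that is a real FQDN or at least unique on your network):

  visible_hostname squid-proxy-1.example.internal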


> 
>     host_verify_strict off

This is the default. No need to configure it.

Also, if you added that because the errors you mentioned are talking
about domain verification - be aware that HTTP "Host:" header
verification is quite a different thing from TLS certificate verification.


> 
>     # Handling HTTP requests
>     http_port 3128
>     http_port 3129 intercept
> 
>     sslcrtd_children 10
> 
>     acl CONNECT method CONNECT
> 
>     # AWS services domain
>     acl allowed_http_sites dstdomain .amazonaws.com
>     # docker hub registry
>     acl allowed_http_sites dstdomain .docker.io
>     acl allowed_http_sites dstdomain .docker.com
>     acl allowed_http_sites dstdomain www.congiu.net
> 
>     # Handling HTTPS requests
>     # https_port 3130 intercept ssl-bump generate-host-certificates=on
>     dynamic_cert_mem_cache_size=100MB cert=/etc/squid/squid.pem
>     https_port 3130 intercept ssl-bump dynamic_cert_mem_cache_size=100MB
>     cert=/etc/squid/squid.pem

FYI: both of the lines above behave identically, because the generate-*
setting you removed was being set to its default value anyway.


>     acl SSL_port port 443
> 
>     # AWS services domain
>     acl allowed_https_sites ssl::server_name .amazonaws.com
>     # docker hub registry
>     acl allowed_https_sites ssl::server_name .docker.io
>     acl allowed_https_sites ssl::server_name .docker.com
> 
>     # project specific
>     acl allowed_https_sites ssl::server_name www.congiu.net
>     acl allowed_https_sites ssl::server_name mirrors.fedoraproject.org
>     acl allowed_https_sites ssl::server_name mirror.csclub.uwaterloo.ca
> 
>     # nslookup resolved IPs for collectors.sumologic.com
>     # workaround solution to support sumologic collector
>     acl allowed_https_sites ssl::server_name .sumologic.com
>     # THE FOLLOWING TWO LINES DO NOT SEEM TO WORK AS EXPECTED

The expectation is wrong here.

The string "sslflags=DONT_VERIFY_PEER" is not a valid domain nor server
hostname. So highly unlikely that the X.509 certificate SubjectName or
AltSubjectName from the origin server will contain that string.

Also, the flag sets the ACL matching algorithm. When that is set the ACL
cannot match during a ssl_bump "peek step1" cycle.

So one should expect this ACL to stop working when these lines are
added. Not expect that it would do anything useful.
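
If the intent was simply to also list those exact hostnames, the plain
form would be enough (a sketch; note your existing ".sumologic.com"
entry already matches them as sub-domains):

  acl allowed_https_sites ssl::server_name service.sumologic.com
  acl allowed_https_sites ssl::server_name service.us2.sumologic.com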


FYI: The string "sslflags=DONT_VERIFY_PEER" is the name and value of a
option other directives elsewhere in squid.conf can use. But it is a
very, very, very bad idea to do so - even 'just for testing'.

 The flag DONT_VERIFY_PEER disables all of TLS security checks - meaning
the connection actively becomes *less* safe than regular/plain-text TCP
connections, while simultaneously hiding all resulting issues from
*your* admin view. Users still have problems, you just cannot see any
hint of them.
 So please purge that setting from any configs and documents you come
across. Investigate and fix any TLS problems that appear, don't just
hide the error messages and pretend everything works.
 Same reason not to be using the equivalent "curl -k" option for testing
TLS validation/verification problems.



>     # acl allowed_https_sites ssl::server_name --server-provided
>     service.sumologic.com sslflags=DONT_VERIFY_PEER
>     # acl allowed_https_sites ssl::server_name --server-provided
>     service.us2.sumologic.com sslflags=DONT_VERIFY_PEER
> 
>     acl step1 at_step SslBump1
>     acl step2 at_step SslBump2
>     acl step3 at_step SslBump3
> 
>     ssl_bump peek step1 all

The "all" here is useless and only adds confusion to anyone who thinks
it has any meaning.


>     ssl_bump peek step2 allowed_https_sites
>     # http://lists.squid-cache.org/pipermail/squid-users/2018-September/019150.html

The author of that did not understand how the ssl-bump processing was
working. That entire message thread is them attempting to learn and IMO
still not quite understanding the ideas in the end.

Blindly copying into your config from experiments by someone who does
not understand what they are doing is not a good idea. Use an actually
known-working config example (the Squid wiki has several), or try to
design your own based on your own understanding. At the very least we
can see from your self-designed attempt what you may be thinking and
hopefully teach you where any mistakes are visible.


>     ssl_bump bump

Please be aware that when this line is reached at step2 it performs
"client-first" bumping of the TLS.

That means bumping and performing the TLS handshake without any real
X.509 server details for your allowed_https_sites ACL to use - only
client-provided claims about what server they are contacting (which may
be outright lies). This has side-effects on what your ssl::server_name
vs dstdomain ACLs do in the later http_access checks.

Specifically, when the server_name and the URL domain are different,
something as simple as which one gets tested first can change the
permissions the client ends up with in the other.


>     ssl_bump splice step3 allowed_https_sites

So some of your traffic will be spliced by the above line - but only
because the peek (step2) followed by bump (step3) combination is impossible.


>     ssl_bump bump

There is already an unrestricted "bump" action earlier. This line does
nothing, even if it could somehow be reached.


>     ssl_bump terminate step2 all

There is a peek action specified earlier for step2, with an unrestricted
bump action as a fallback when allowed_https_sites fails to match. This
line is never reachable.
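
For reference, a minimal peek-and-splice ruleset in the spirit of the
wiki examples would look roughly like this (a sketch only, reusing your
existing ACL names; adjust it to your own policy before relying on it):

  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice allowed_https_sites
  ssl_bump terminate all

With only a peek at step1, the server_name ACL is tested against the
client SNI at step2; matching connections get spliced and everything
else is terminated, with no bumping involved at all.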

> 
>     http_access allow CONNECT

Ouch. Really do not do the above. The default config file shipped with
Squid starts with these lines for very good reasons:

"
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager

  #
  # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
  #
"

Those reasons (DoS and proxy relay security vulnerabilities) are still
very much relevant in your setup, with no reason to remove them. So
please add them back before you continue testing things, with your
custom http_access rules *underneath* that comment line.

Also, the default SSL_ports is already set up in a way that meets your
requirements. You can adjust Safe_ports to allow only that same port, or
use the default set of safe-for-HTTP ports.

FYI: the config you have right now allows any malicious origin server
receiving traffic on port 443 to present an X.509 certificate claiming
to be whatever domain it likes.
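
Put together, the http_access section would then read roughly like this
(a sketch only; your existing allow rules slotted underneath the
standard security rules, and leaving aside the two deny CONNECT lines
discussed further down):

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager

  # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
  http_access allow allowed_https_sites
  http_access allow allowed_http_sites
  http_access deny all

(Safe_ports, SSL_ports, localhost and manager are all defined in the
default config file shipped with Squid.)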


> 
>     # http_access allow SSL_port
> 
>     http_access deny CONNECT !allowed_https_sites
>     http_access deny CONNECT !allowed_http_sites

The above two lines do nothing in your current config. CONNECT requests
are *always* allowed by the line you had earlier.

Once you move back to the default security checks these will start to do
things. It would probably be best to remove the two lines above to
prevent the unexpected new behaviour from confusing you further.


>     http_access allow allowed_https_sites
>     http_access allow allowed_http_sites

These ACLs are very badly named.

* The one called "allowed_https_sites":
 - will *not* match against HTTPS traffic arriving on port 3128 unless
the CONNECT authority-uri names a domain in your list,
 - *will* match against HTTP traffic on port 3128 and port 3129 with
"https://" URLs.

* The one called "allowed_http_sites" *will* match against HTTPS traffic
arriving on any port.

 --> meaning that for both of them the "https" and "http" words in
their names are deceptive.

* Both of these ACLs are used to *deny* traffic.
 --> meaning the "allowed" word in their names is deceptive.

What you are left with is just "sites", which is so vague as to be
meaningless.


It would be a lot clearer if you renamed them:
 - "allowed_https_sites" to "tls_servers"
 - "allowed_http_sites" to "url_domains"


>     http_access deny all
> 
>     cache deny all
> 
>     debug_options "ALL,9"

Cheers
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



