Re: Working peek/splice no longer functioning on some sites

On 2/09/19 8:44 am, torson wrote:
> For me it works with "ssl_bump peek step1", not with "ssl_bump peek all".
> 

That tells me that your clients are lying to your proxy.

"peek step1" means only the client-provided detail is available. eg the
client says it is going to example.net (a domain which you allow) but
actually goes to othersite.example.com.

"peek all" means Squid also checks at step 2 against the server
certificate. eg Squid now sees the "othersite.example.com" detail and
rejects/terminates the bad client.
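
If it helps, one way to spell out that behaviour explicitly (reusing the
step1 and allowed_https_sites ACL names from your config below; treat this
as a sketch rather than a drop-in replacement) is:

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  # peek at the ClientHello (SNI), then again at the server certificate
  ssl_bump peek step1
  ssl_bump peek step2
  # splice only when the server name, now cross-checked against the
  # certificate, is on the allow list
  ssl_bump splice allowed_https_sites
  ssl_bump terminate all

"peek all" should end up in much the same place: the splice/terminate
decision is only made after the server certificate has been seen.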



> My working config using Squid 4.8:
> ---
> visible_hostname squid
> debug_options ALL,1
> positive_dns_ttl 0
> negative_dns_ttl 0

Minimum value for negative_dns_ttl is 1. positive_dns_ttl must be
*greater* than negative_dns_ttl.
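
For example (values here are purely illustrative, not recommendations):

  negative_dns_ttl 1 seconds
  positive_dns_ttl 60 seconds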

> client_persistent_connections off
> http_port 3128
> http_port 3129 intercept

You are missing the basic DoS and cross-protocol smuggling protections
provided in the default squid.conf.

Without those default rules your proxy is far more vulnerable to DoS
attack than it needs to be. With this setup Squid performs the complex
and slow regex ACL checking for every request in a DoS flood, whereas the
defaults are very fast port/method matches.
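
For reference, the relevant part of the shipped squid.conf looks roughly
like this (paraphrased from memory - check the copy your package installed
for the exact list, and keep the deny rules above your allow rules):

  acl SSL_ports port 443
  acl Safe_ports port 80 21 443 70 210 280 488 591 777 1025-65535
  acl CONNECT method CONNECT
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager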

I suggest you extend the logformat to record the %ssl::<cert_subject
details from the server certificate. Then decide whether (and how) to
adjust the server_name ACL rules.

You may want to use %ssl::<cert_errors as well if the problem is due to
server validation errors.
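
Something along these lines, with the extra codes tacked onto your
existing logformat (wrapped here for mail; these fields only get values
once the server certificate has actually been fetched):

  logformat general %tl %6tr %>a %Ss/%03>Hs %<st %rm %ssl::bump_mode \
    %ru %ssl::>sni %ssl::<cert_subject %ssl::<cert_errors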



> acl allowed_http_sites dstdom_regex "/etc/squid/allow_list.conf"
> http_access allow allowed_http_sites
> https_port 3130 intercept ssl-bump \
>   tls-cert=/etc/squid/ssl/squid-ca-cert-key.pem \
>   options=SINGLE_DH_USE,SINGLE_ECDH_USE,NO_SSLv2,NO_SSLv3 \

SSLv2-related settings are obsolete in Squid-4, even the ones disabling it.
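
e.g. that line can shrink to something like (sketch only):

  options=SINGLE_DH_USE,SINGLE_ECDH_USE,NO_SSLv3

or, if I recall correctly, tls-min-version=1.0 on the port line is the
modern replacement for the NO_SSLv* flags altogether.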

>   tls-dh=/etc/squid/ssl/dhparam.pem
> acl SSL_port port 443
> http_access allow SSL_port
> acl allowed_https_sites ssl::server_name_regex "/etc/squid/allow_list.conf"
> tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump splice allowed_https_sites
> ssl_bump terminate all
> http_access deny all
> logformat general      %tl %6tr %>a %Ss/%03>Hs %<st %rm %ssl::bump_mode %ru
> %ssl::>sni 
> access_log daemon:/var/log/squid/access.log general
> ---
> 
> One thing to note are the "positive_dns_ttl 0" and "negative_dns_ttl 0"
> directives ; my findings are that DNS caching needs to be set to zero in
> cases where DNS records get changed every minute due to roundrobin combined
> with hosting in environments where record changes faster than TTL - on AWS
> where you're hitting different DNS servers with each having a different TTL.
> I was getting a lot of host forgery errors before setting those to 0.
> This is in addition to all the servers using the same DNS address.
> 

IMO you are misunderstanding what the problem is and causing yourself
potentially worse problems in the long run.

FYI: Round-robin DNS has almost nothing to do with this issue. Under
round-robin the IPs Squid is checking for are still present, just not
first in the set. So the Host verify passes.
The only part it might play is when the IP address set is too big for
the DNS packets your network (and your upstreams) allow. EDNS and
Jumbogram support resolves those issues entirely.
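
IIRC the squid.conf directive for that is dns_packet_max; the value below
is only an example, size it to what your network path can actually carry:

  dns_packet_max 4096 bytes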


 The problem is that the DNS responses from the providers are *entirely*
different from one lookup to the next. This is guaranteed to happen when
Squid and the client are using completely different DNS resolver chains
for their lookups.

 -> If the DNS resolver difference is within your network, you need to
fix *that* instead of hacking away at the DNS cache in Squid to violate
DNS protocol requirements. The client and other DNS caches in your
network are still using those TTLs properly - so you are just shifting
the verify failure from one HTTP transaction to another.

 -> If the DNS chain difference is at the provider end (as it is for
Akamai CDN, and anyone using Google resolvers), then you need to
*reduce* the number of out-of-sync lookups being done by both Squid and
the client. This TTL hack is the exact opposite of what you need. The
best workaround is to have your internal resolver set up to extend the
TTLs for the known problematic services.
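
As a rough illustration of that last idea: if your internal resolver were
unbound (just an example, yours may differ), a blunt global version of it
is a minimum cache TTL in unbound.conf:

  server:
      cache-min-ttl: 300

Per-service overrides depend on what your resolver software supports.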


Setting negative_dns_ttl to a value less than 1 second causes every
*internal* use of the IP address by Squid to require a new DNS lookup.
This can result in weird behaviour such as the arriving request's
http_access rules matching against one IP, the server being connected to
having a second IP, and the log entry recording yet another one.

Ignoring the proper TTLs can also result in your server being detected
by anti-DoS protection at your upstream DNS provider(s) and blocked from
service entirely.

You have been lucky that your config is so simple it does not do more
than 2 DNS queries per request, and/or that your traffic volume is so
low the premature repeat queries are not being noticed (yet).


FWIW: the "client_persistent_connections off" setting you already have
does more to address the problem those CDNs create. It prevents clients
re-using a TCP connection set up with now-stale DNS details, boosting
Squid's chance of seeing the same DNS the client has.


HTH
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



