Re: host header forgery false positives

On 12/01/2016 2:40 p.m., Jason Haar wrote:
> Hi there
> 
> I am finding squid-3.5.13 false-positives on ssl-bump far too often.
> I'm only using "peek-and-splice" on intercepted port 443 to create
> better squid logfiles (i.e. I'm not actually bumping), but that enables
> enough of the code path for the Host forgery check to kick in - and it
> doesn't work well in a real network.
> 
> As you can see below, here's a handful of sites we're seeing this
> trigger on, and as it's my home network I can guarantee there are no
> odd DNS setups or forgery going on. These are just real-world websites
> doing what they do (i.e. ones totally outside our control or influence).
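
(For reference, the peek-then-splice setup described in the quote above
typically corresponds to squid.conf rules along these lines; the port
number and certificate path here are illustrative, not taken from the
poster's config:)

    # Intercepted HTTPS: peek at step 1 to read the client SNI for
    # logging, then splice (pass through) everything without decrypting.
    https_port 3129 intercept ssl-bump cert=/etc/squid/ca.pem
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump splice all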

Unfortunately, that is not how the vulnerability works. Any web page you
visit could have an embedded script or advertisement that performs the
Host forgery without your knowledge or any ability to prevent it.
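
For illustration (the address and site name here are invented): a script
can open a TCP connection to an attacker-controlled server, say
203.0.113.5, while labelling the request with an unrelated site's name.
The interceptor then sees:

    GET /poison.js HTTP/1.1
    Host: www.example-bank.com

If Squid trusted the Host header over the real destination IP, the
response could be cached or access-controlled under the wrong site's
name; that is what the validation guards against.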

> 
> I don't know how the forgery-checking code works, but I guess what's
> happening is that the DNS lookups the squid server does don't return
> the same IP addresses the client resolved the same DNS name to.

Correct. That is exactly how it works.

> I must say that is odd, because all our home computers use the squid
> server as their DNS server - just as the squid service does - so there
> shouldn't be any such conflict. But I imagine caching could be to blame
> (maybe the clients cache old values for longer/shorter timeframes than
> squid does).

HTTP persistent connections can outlive the DNS TTL. We have found that
if the DNS records rotate while the connection is still receiving
requests, those new requests will wrongly fail the validation.
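
In rough terms, the check reduces to the following sketch (a simplified
model for discussion, not Squid's actual code):

    import socket

    def host_matches_destination(host_header, original_dst_ip):
        """Forgery check for intercepted traffic: the IP the client was
        actually connecting to must be among the addresses the Host
        header resolves to *for Squid*, at the time Squid checks."""
        try:
            infos = socket.getaddrinfo(host_header, None)
        except socket.gaierror:
            return False  # an unresolvable Host header fails validation
        resolved_ips = {info[4][0] for info in infos}
        # The false positive described above: the client resolved the
        # name earlier (or holds a persistent connection), and that IP
        # has since rotated out of the answer Squid receives.
        return original_dst_ip in resolved_ips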

I have a plan for how to fix this case, but it requires some big changes
to the Squid DNS cache to avoid introducing a new vulnerability. I plan
to do that work later this year, during the Squid-5 development cycle.


> 
> This is a bit of a show-stopper for ever using bump: having perfectly
> good websites become unavailable really isn't an option (in the case of
> "peek-and-splice" over intercept, they seem to hang forever when this
> error occurs). Perhaps an option to change its behaviour would be
> better? e.g. enable/disable, and maybe "ignore the client and use the
> IP addresses squid thinks are best" could work?

Host validation should not be happening for CONNECT requests or bumped
traffic.
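
For completeness: the one existing knob in this area is
host_verify_strict (default off). It controls whether a detected
mismatch is rejected outright; with it off, the request is still not
cached and is relayed only to the client's original destination IP.

    # squid.conf - handling of Host/destination mismatches on
    # intercepted traffic; ON rejects them with an error response.
    host_verify_strict off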

If you can obtain an ALL,9 cache.log trace, it might help identify the
issue. A bug report would be nice as well, to track progress.
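
(Such a trace can be captured by temporarily raising the debug level to
maximum; it is extremely verbose, so enable it only while reproducing
the problem, then revert:)

    # squid.conf - full-detail tracing into cache.log
    debug_options ALL,9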

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



