
Re: More host header forgery pain with peek/splice

Do I understand it correctly that Squid in normal proxy mode
allows malware to do a CONNECT to any destination, while in
transparent proxy mode it performs extra security checks, which
cause some regular (non-malware) clients to fail?

And philosophical questions: is Squid the right tool
to stop malware?  If yes, is it acceptable that connections
of regular (non-malware) clients are wrongly dropped?

IMO Squid should do all it can to be a secure proxy.
Doing security checks on connections in an attempt
to stop malware sounds like a job for an antivirus / IDS tool.

Marcus


On 08/30/2016 01:01 PM, Amos Jeffries wrote:
On 26/08/2016 6:34 a.m., reinerotto wrote:
Hack the code. Because it is even worse, as Firefox, for example,
does not obey the TTL.


It is not that simple. The checks are there for very good reason(s)
related to security of the network using the proxy.

The Host forgery issue being checked for allows network firewall rule
bypass, browser same-origin bypass, and browser sandbox bypass - in a
way which places the attacker in control of what logs you see (aha!
invisible access to the network), with all the related nasty
side-effects those allow. There are both malware and services for sale
around the 'net that take advantage of the attack to do those bypasses.
=> Simply disabling the check code is a *very* risky thing to do.


The cases where Squid still gets it wrong are where the popular CDN
service(s) in question are performing DNS actions indistinguishable from
those malware attacks. If Squid can't tell the difference between an
attack and normal DNS behaviour, the only code change possible is to
disable the check (see above about the risk level).


FYI: I have a plan to reduce the false-positive rate from DNS rotation
effects. But that requires some deep redesign of the DNS code, which I'm
intending to do as part of the Squid-5 roadmap to avoid further
destabilizing 4.x while it's in beta.

For now the workarounds are:

* obey the requirement that destination NAT (if any) is performed only
on the Squid machine.
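
On Linux, that requirement usually means keeping the REDIRECT/DNAT rule on the Squid box itself rather than on an upstream router. A sketch (the interface name and intercept port are assumptions; match them to your http_port line):

```
# On the Squid machine itself, never on a separate router:
# redirect port-80 traffic arriving on the LAN interface to
# Squid's intercept port (3129 here is an assumed example).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129
```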

* tune the lifetime of persistent client connections. That reduces
(but does not fully prevent) connections outliving DNS rotation times
and thus causing requests to have a different ORIGINAL_DST from what
DNS says.
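
A squid.conf sketch for that tuning (the values are illustrative assumptions, not recommendations; choose them relative to the DNS TTLs of the CDNs you see problems with):

```
# Close idle client persistent connections sooner, so they are less
# likely to outlive the DNS rotation interval of popular CDNs.
client_idle_pconn_timeout 30 seconds

# Cap the total lifetime of any client connection (default is 1 day).
client_lifetime 10 minutes
```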

* if you want Google's 8.8.8.8 service as your resolver, use a local
DNS recursive resolver, shared by Squid and the clients, which points
to that service as its parent/forwarding resolver. That removes the
issue of every 8.8.8.8 response having different reply IP values (so
client and Squid doing near-simultaneous lookups get different IPs).
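
One way to set that up (a sketch using Unbound as the shared local resolver; the unbound.conf layout here is an assumption about your environment):

```
# /etc/unbound/unbound.conf (fragment)
server:
    interface: 127.0.0.1

forward-zone:
    name: "."
    forward-addr: 8.8.8.8
```

Then point both the clients (via DHCP or /etc/resolv.conf) and Squid at it:

```
# squid.conf: use the same local resolver the clients use
dns_nameservers 127.0.0.1
```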

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



