
Re: Re: SQUID in TPROXY - do not resolve

On 24/10/2013 6:44 a.m., Plamen wrote:
> Yes,
>
> this is one of the problems I'm also experiencing,
>
> the customer is using a different DNS than the Squid, and he complains
> because he says - without your SQUID I can open xxxx web page, but with
> your SQUID it's not opening.

Ah. So the real problem is "Why is it not opening for Squid?"

The current releases of Squid *do* use the client-provided destination IP. The DNS resolution is only used to determine whether the response is cacheable, and whether alternative IPs may be tried as a backup _if_ Squid is unable to connect to the one the client gave.
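For reference, that behaviour is governed by squid.conf directives along these lines (a sketch only; check the defaults and exact semantics for your Squid version):

```
# Prefer the client-provided (intercepted) destination IP for the outbound
# connection, rather than an IP from Squid's own DNS lookup.
client_dst_passthru on

# Host-header verification. When strict, requests whose Host header does not
# resolve to the intercepted destination IP are rejected outright; when off,
# they are forwarded to the client-given IP but the response is not cached.
host_verify_strict off
```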

IME the usual causes of these complaints are one of:
* routing external server SYN-ACK packets back to the subscriber's machine instead of the proxy
* using the client/subscriber's destination IP even when it is not routable from Squid
* the subscriber's custom DNS resolving an internal domain, while their software sends the request globally where it cannot resolve
* the destination genuinely having a network outage (the subscriber failed to say it was _yesterday_ [or even last week] when it worked without the proxy)
* HTTP headers like X-Forwarded-For with a valid IPv6 address *crashing* some ASP.NET services
* the destination advertising IPv6 addresses which are not responding
* the destination rejecting any contact through a proxy (usually for security reasons)
* proxy config rules rejecting the traffic, with subscribers only reporting "won't work" instead of "proxy error page"
* ECN protocol differences between the Squid box OS and the subscriber's machine OS
* TCP window scaling differences between the Squid box OS and the subscriber's machine OS

Many reasons for "doesn't work" ... small details matter.

"Won't connect" implies more specifically that several of those reasons are more likely than others.


> Imagine network with 50000 end subscribers and having to react on similar
> cases on daily basis... I am ready to sacrifice whatever benefits are there
> with DNS resolving done by SQUID to overcome the above mentioned problems.

Imagine one of them fetched http://google.com/ from a server set up to install malware and then redirect to http://www.google.com/. If the traffic is not verified, every single HIT for http://google.com/ from that point on delivers the cached infection, regardless of whether the other clients were actually going to google.com: a HIT means no upstream is contacted at all, just the cached malware served.
The only safe choices available are to verify, or not to cache.
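The poisoning scenario above can be sketched in a few lines. This is a hypothetical illustration, not Squid code: the IPs, the `trusted` set, and the helper names are all made up. It models a shared cache keyed only on the Host header, with and without verifying that the Host actually resolves to the destination IP the client supplied.

```python
def fetch(dest_ip, origin_servers):
    """Pretend origin fetch: returns whatever the server at dest_ip serves."""
    return origin_servers[dest_ip]

def proxy_request(cache, dest_ip, host, origin_servers, verify=False):
    # verify=True models Host verification: only cache the response when the
    # Host header genuinely resolves to the destination the client gave us.
    if host in cache:
        return cache[host]              # HIT: no upstream contact at all
    body = fetch(dest_ip, origin_servers)
    trusted = {"203.0.113.10"}          # made-up IP "google.com" really resolves to
    if not verify or dest_ip in trusted:
        cache[host] = body              # unverified path caches forged responses too
    return body

origin_servers = {
    "203.0.113.10": "real google page",
    "198.51.100.66": "malware redirect",   # attacker-controlled server
}

# Without verification: one forged request poisons the cache for everyone.
cache = {}
proxy_request(cache, "198.51.100.66", "google.com", origin_servers)
victim_unverified = proxy_request(cache, "203.0.113.10", "google.com", origin_servers)
print(victim_unverified)   # -> malware redirect (served from cache)

# With verification: the forged response is never cached.
cache = {}
proxy_request(cache, "198.51.100.66", "google.com", origin_servers, verify=True)
victim_verified = proxy_request(cache, "203.0.113.10", "google.com", origin_servers, verify=True)
print(victim_verified)     # -> real google page
```

The attacker still receives the malware on their own request in both cases; the difference is only whether that response enters the shared cache and is replayed to later clients.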

Amos
