Re: Strange Interaction between Squid and Facebook

Hey Patrick,

CentOS 7.1 is a great choice.
First, I should mention that I package Squid RPMs for CentOS 7; since the next Squid release (3.5.11) will be out in a few days, the package will be published later next month.

About the issue itself.
A couple of questions:
Are you running/using ssl-bump or not?
When a client request is aborted, what do you see in the Squid access.log? As far as I know, Facebook works over HTTPS, and an abort is usually caused by either a network issue or an application-level issue. Since I am running squid 3.5.10 and do not see this problem with ssl-bump either ON or OFF, it is unclear what is causing it.
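If it helps, something like this should pull the relevant lines (the log path below assumes the default CentOS location; adjust it to match your access_log directive):

    # Recent Facebook/fbcdn entries; client-aborted requests usually show an
    # _ABORTED suffix in the result-code column (e.g. TCP_MISS_ABORTED)
    grep -E 'facebook\.com|fbcdn\.net' /var/log/squid/access.log | tail -n 50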

I couldn't quite follow how you ran the tests.
I do understand that you have two proxies and one is peering with the other, right?

Thanks,
Eliezer

On 29/10/2015 21:44, Patrick Blair - Peapod wrote:
Hi All,

I apologize for the length of this post, but I'm really at my wits' end and
am completely out of ideas as to how I might fix this or why this is
happening.

Background and Architecture:
We are using Squid as our user internet access proxy; it performs LDAP
authentication when a site outside our allowed list of domains is accessed.
We run the proxy at our secondary datacenter and route all user internet
traffic out through the links on that end, so that user traffic does not
"conflict" with our main website traffic at our primary datacenter, since we
don't yet have separate circuits for each type of traffic. Hundreds of users
access the proxy on a daily basis.
We were running Squid 3.3.12 on Solaris 10 11/06 SPARC with a fairly basic
configuration (I can provide it if necessary), and everything was working
"fine" as far as we could tell (no complaints from the users). The only
problem was that the physical hardware was very old and needed to be
retired, and it was a box we couldn't easily add more resources to.
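
Roughly, the auth/allowlist portion of that configuration looks something
like the following (the helper path, ACL names, LDAP server, and base DN
below are illustrative placeholders, not our real values):

    # LDAP basic-auth helper (placeholder helper path, server, and base DN)
    auth_param basic program /usr/lib64/squid/basic_ldap_auth -b "dc=example,dc=com" -f "uid=%s" -h ldap.example.com
    auth_param basic realm Internet Access Proxy
    auth_param basic credentialsttl 2 hours

    acl ldap_users proxy_auth REQUIRED
    acl allowed_domains dstdomain "/etc/squid/allowed_domains.txt"

    # Allowed domains pass without auth; everything else needs LDAP credentials
    http_access allow allowed_domains
    http_access allow ldap_users
    http_access deny all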

So I set up a new proxy running CentOS 7.1 x86_64 on a VM (in VMware ESXi)
with the latest Squid release (3.5.10) built from source, with a
configuration along the lines of
http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster. We moved to a VM
so we would have greater flexibility and could add more resources easily if
the server became stressed.
It worked very well in our testing, with no real issues that we could see,
so we rolled it out to replace the old proxy.
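
For reference, the overall shape of that SMP/CARP configuration is roughly
the following (worker count, ports, and peer names here are illustrative,
not our exact values; the real config follows the wiki example):

    # One frontend worker plus two caching backends (illustrative counts/ports)
    workers 3

    if ${process_number} = 1
    # Frontend: accepts client traffic and CARP-balances it to the backends
    http_port 3128
    cache_peer 127.0.0.1 parent 4002 0 carp name=backend1
    cache_peer 127.0.0.1 parent 4003 0 carp name=backend2
    never_direct allow all
    else
    # Backends: listen locally and do the actual caching
    http_port 127.0.0.1:400${process_number}
    endif

    # (auth and http_access ACLs omitted here for brevity)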

Since then, everything has been working fine, apart from one site,
Facebook, not loading correctly. It varies with the particular browser
accessing it, but some or most of the stylesheets or content don't appear
to load correctly, resulting in a very mangled look. The developer tools in
each respective browser show that the connections to the stylesheets or
content are aborted. Most of this content is hosted on fbcdn.net, which
we've made sure is on our allowed domains list.
What is interesting is that the first object (or first few objects) is
retrieved successfully through the proxy, but subsequent ones are denied.
Again, this appears to vary by browser/OS combination, with Chrome on OS X
and IE on Windows seeming to work the best.

We are NOT running TPROXY or any other sort of intercepting proxy; all the
clients are explicitly aware of the proxy's existence through a .pac
configuration file pushed out via Group Policy.
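
The .pac file itself is nothing exotic; it is roughly of this shape (the
proxy hostname/port and the internal-network test are placeholders, not our
real values):

    function FindProxyForURL(url, host) {
        // Internal hosts go direct (placeholder network)
        if (isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0"))
            return "DIRECT";
        // Everything else goes through the explicit proxy (placeholder name/port)
        return "PROXY proxy.example.com:3128";
    }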

I've tried disabling ssl_bump (which shouldn't be enabled anyway) for the
Facebook domains, setting cache deny for those domains, and setting
always_direct for those domains; none of that has had any effect.
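
Concretely, what I tried looks something like this (the ACL name is mine and
the domain list may not be exhaustive):

    acl facebook_sites dstdomain .facebook.com .fbcdn.net
    # Don't bump these (splice passes the TLS through untouched); in older
    # configs this might be written as "ssl_bump none facebook_sites"
    ssl_bump splice facebook_sites
    # Don't cache them
    cache deny facebook_sites
    # Always fetch them directly, never via a cache peer
    always_direct allow facebook_sites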

I've also tried reverting to a simpler config, even the exact config that
we were using on the "old" Squid that worked on Solaris, but that too
fails. I've also switched from Squid 3.5.10 to the version packaged by
CentOS (squid-3.3.8-12.el7_0.x86_64) and tried that with both
configurations, to no avail.

The only thing that has worked is setting up a "test" Squid at our primary
datacenter with the same configuration; that one does work. We've checked
and verified that there are no custom routes or other network configuration
differences between the servers, only the IP addresses. Both are on
unrestricted VLANs that allow direct access to the internet. We are
checking with our networking team to see if there is any custom routing in
place on their end, but it's very doubtful that is the case.
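
(The comparison between the two boxes was just the basics, along these
lines:)

    # Compare routing tables and interface config on both proxy VMs
    ip route show
    ip addr show
    # Check the TCP path out to Facebook over 443 from each box (needs root)
    traceroute -T -p 443 www.facebook.com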

I believe I've covered everything here; I can provide any other information
or configurations if necessary (I didn't include them here because of the
length already). If anyone out there has encountered this issue, I would
GREATLY appreciate any information or troubleshooting assistance you could
provide.


Best Regards,

Pat Blair
Sr. Unix Administrator
Peapod, LLC
pblair@xxxxxxxxxx



_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users

