
Re: RE: Reverse Proxy Configuration redirects to HTTP rather than HTTPS [NOT PROTECTIVELY MARKED]


 



On 8/10/2013 8:10 p.m., John Gardner wrote:
This email has been classified as: NOT PROTECTIVELY MARKED

I wonder if someone can help me out with an issue that has come to light with a new application we are running behind our Squid 2.6 Reverse Proxy Server.
At the moment we have the situation shown below:
INTERNET ---> |FIREWALL1| ---> |REVERSE-PROXY| ---> |FIREWALL2| ---> APPLICATION WEB SERVER
For all applications, traffic comes in on HTTPS (and HTTP as well, but mostly HTTPS) from the Internet, passes through FIREWALL1, offloads the SSL at the REVERSE-PROXY, and then the rest of the traffic flows as HTTP through FIREWALL2 and on to the APPLICATION WEB SERVER.
This works for all of the sites we've been serving for the past two years, but for this particular new application, if you connect using https://my.server.com, when the app redirects, Squid appears to go to http://my.server.com i.e. it does not stay encrypted.  I've found a similar problem in this post using mod_proxy (http://serverfault.com/questions/388927/apache-reverseproxypass-redrects-to-http-rather-than-https). Can you point me in any direction to assist with this solution please?
Amos

Thanks for the response.  Squid is configured as per: http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate

So the exact config is as follows (all of the specific details have been obfuscated):

https_port 192.168.1.43:443 cert=cert.crt key=key.pem cipher=ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM options=NO_SSLv2 defaultsite=mywebsite3.mydomain.com vhost

cache_peer 10.1.0.14 parent 8080 0 no-query originserver name=server_5

acl sites_server_5 dstdomain my.server.com
cache_peer_access server_5 allow sites_server_5

What actually happens is that when the browser goes to https://my.server.com, 99% of the site is rendered correctly i.e. it works as it should.  There is one link however, generated by JavaScript in the application, which always comes back as http://my.server.com (not encrypted).  I assumed this was an application problem, but the vendor thinks it's Squid.  I was hoping I could force this to HTTPS only using something along the lines of:

It is a problem with the whole backend environment of your servers. The interaction between the backend webserver software and the application is not working. Squid is just trying as best it can to gateway the HTTPS traffic into that environment.

The only difference between HTTP and HTTPS is the existence of SSL/TLS wrappers and port 80/443 usage. Since your setup is using a non-standard port without SSL/TLS wrappers, there is no way for the backend server to automatically identify what scheme is being used. Given the absence of SSL/TLS it would be very reasonable to assume http://. You have to configure something there for it to know AND the application needs to be given that information somehow.

Squid is able to pass the "Front-End-Https: on" header to backends if the similarly named cache_peer option is configured.
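
As a sketch only, based on the cache_peer line from your config above (adjust the port and name to your real setup), that would look something like:

# Sketch: front-end-https=on makes Squid send "Front-End-Https: on"
# to this peer so the backend/application can tell that the
# client-facing connection was HTTPS.
cache_peer 10.1.0.14 parent 8080 0 no-query originserver front-end-https=on name=server_5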

NOTE that by sending the backend connection over a link without SSL/TLS you are breaking the assumption of end-to-end security and any observer who can hook into the network behind the proxy can tap into the communications.

acl port80 myport 80
http_access deny port80
deny_info https://my.server.com/ port80

Yes you can try this to work around the problem. It will hide the broken URLs being emitted by the application, but some clients will still complain to the end-user about insecure content being embedded in secure pages. Also any data (cookies, session IDs, personal details, POST/PUT request bodies) delivered to the unsecured URLs will be publicly visible on those first redirected requests. So the HTTPS security there is quite broken no matter how "working" it appears to be.


Or is this more an application issue which should be fixed by the vendor?  Any help is greatly appreciated.

This is a problem requiring multiple different changes at various places.
* the application vendor should be using //foo URLs whenever possible, omitting the explicit http: or https: scheme and leaving the client to use the scheme it believes is best.
* the admin of your backend server should either
 - configure it to pass details like the scheme type to applications, faking https:// when requests come from your Squid, or
 - use port 443, or at least SSL on port 8080, on the backend server with an SSL certificate between Squid and the server. This link is private between Squid and the server so the certificate can be self-signed; to be secure the only requirement is that Squid has a copy of the CA used to sign the peer's certificate so it can validate it.
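
A sketch of the Squid side of that second option, assuming (hypothetically) the backend is switched to HTTPS on port 443 and the signing CA is saved as /etc/squid/backend-ca.pem:

# Sketch only: re-encrypt traffic between Squid and the backend.
# The ssl flag makes Squid use SSL/TLS to this peer; sslcafile= points
# at the CA certificate that signed the peer's certificate so Squid
# can validate it.
cache_peer 10.1.0.14 parent 443 0 no-query originserver ssl sslcafile=/etc/squid/backend-ca.pem name=server_5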

Amos



