Thanks again Darryl. I had looked at the httpd_accel_host virtual option, but figured it only worked for single backend servers, which is why I thought it should work as a standard proxy for a small number of 'public sites'. Just whack it in backwards, huh? Should do the trick!

Our application servers don't have public IPs; we have a single public IP on our firewall and everything else is private. I had a look at redirect_rewrites_host_header and figured I definitely didn't want that! She certainly is a powerful beast, but some parts of the config seem sparse in their purpose definition - or maybe it's just me?

The function I most want is the proxying, as redirection will cause the return traffic to go via the firewall directly and all will break. Will give httpd_accel_uses_host_header a crack - it looks like it should do the job! It was the single IP that made the host header a requirement; it's just more of a pain when going to multiple servers rather than sites on a single box.

Thanks for your input, it has been greatly appreciated.

John

-----Original Message-----
From: Darryl L. Miles [mailto:darryl@xxxxxxxxxxxx]
Sent: Tuesday, 16 August 2005 8:27 p.m.
To: John Rooney
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: Re: FW: reverse proxy

John Rooney wrote:

>The firewall provides NAT translation and forwards the http traffic to
>the squid proxy.
>
>What I'm doing is putting the Squid into a DMZ and trying to use proxy
>facilities to host the internal boxes. That way only the squid is able
>to be compromised. So there is a mix of general web http traffic and
>some with ntlm or chap authentication.
>

This NTLM and CHAP authentication is for your customer/client base to
access your application servers.

>The features I'm wanting fall neatly outside of both definitions you
>supplied, as it is purely to isolate our production application servers.
>All inbound traffic hits the squid and uses host headers to identify
>the appropriate internal server, then proxies the connection to the
>right one. Nothing to do with load balancing or acceleration, and not an
>outbound function. Which is why I refer to it as a reverse proxy.
>

Understood - I see what you are saying. Your usage is very solidly what
squid terms an HTTP accelerator: you are fronting websites, not providing
a shared cache for a browser client network.

Recommendations:

* Ensure that the squid box can use DNS to hit each of the different
  target application servers.
* Check out "httpd_accel_host virtual"; squid has two basic modes of
  operation here: a fixed backend application server IP address, or
  working it out from the inbound customer's Host: header (virtual).
* If you want apache-style log output for full analysis of the real hits
  you are taking, check out the patch at
  http://devel.squid-cache.org/customlog/
* A config may look something like this:

http_port 123.123.123.112:8581
acl services dst 123.123.123.0/24 10.99.81.0/24
http_access deny CONNECT
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
http_access allow services
# The following one may be wrong (even in my config) ...
http_access allow all
visible_hostname cache-fe.domain.com
unique_hostname a2cpu1.cache-fe.domain.com
httpd_accel_host virtual
httpd_accel_port 0
httpd_accel_single_host off
header_access X-Cache deny all
httpd_accel_uses_host_header on

I use it in single_host=on mode, so I don't know what all the stuff about
"See also redirect_rewrites_host_header" is; you'll need to play with
getting squid to hit the right real server.
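One way that might work (a sketch only - the hostnames and private
addresses below are made up for illustration) is to override DNS locally
on the squid box, so that the public site names resolve to the internal
application servers, e.g. in /etc/hosts:

# /etc/hosts on the squid box (hypothetical names and addresses)
10.99.81.10   www.site-one.example.com
10.99.81.11   www.site-two.example.com

With httpd_accel_host virtual and httpd_accel_uses_host_header on, squid
rebuilds the request URL from the customer's Host: header, so each public
name should then resolve to its private server and the request gets
forwarded straight to the right box.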
The above config has got a good chance of working:

* Providing your application servers are still configured to answer to
  their public IP addresses themselves.
* Providing your firewall intercepts all the public traffic to those IPs
  and sends it to squid (after having NATed it first).
* Providing squid's connectivity to the public IP addresses of the
  application servers is not disrupted by the firewall.
* Providing the squid host sends its default route back through the
  firewall.

[internet] <-> [firewall] <-> [squid] <-> [application servers]

should be dandy if you are using different network interfaces between
each... if not, then policy routing may help you out. Not sure if NT can
do that.

Another situation may be that you have squid bind to any IP address on
TCP port 80 and you have multiple IP addresses (through IP aliasing) on
the same ethernet card, each of these IPs being the public IP of one of
the websites you are fronting. Then you are left in a situation where
squid knows exactly which backend host the intended request is for, based
on the IP your customer connected to you on, but I really can't suggest
how you go about setting up the mapping (the listening side of that is
sketched in the PS below).

Best of luck... :)

--
Darryl L. Miles
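PS: For the IP-aliasing idea, the listening side at least is simple -
squid will accept more than one http_port line, one per public alias (the
addresses here are placeholders); it's the mapping from each listening IP
to its backend that I can't offer a recipe for:

# one http_port line per public alias the firewall forwards to squid
http_port 123.123.123.112:80
http_port 123.123.123.113:80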