Hi Darryl,

The firewall provides NAT and forwards the HTTP traffic to the Squid proxy. What I'm doing is putting Squid into a DMZ and using its proxying facilities to front the internal boxes, so that only the Squid host is exposed to compromise. There is a mix of general web HTTP traffic and some traffic with NTLM or CHAP authentication.

What I want falls neatly outside both definitions you supplied, as it is purely to isolate our production application servers. All inbound traffic hits the Squid, which uses the Host header to identify the appropriate internal server and proxies the connection to the right one. It has nothing to do with load balancing or acceleration, and it is not an outbound function, which is why I refer to it as a reverse proxy.

John

-----Original Message-----
From: Darryl L. Miles [mailto:darryl@xxxxxxxxxxxx]
Sent: Tuesday, 16 August 2005 7:45 p.m.
To: John Rooney
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: Re: FW: reverse proxy

John Rooney wrote:

>Under normal conditions, the firewall will forward all http traffic to
>squid and that will reverse proxy to the other web hosts on the network.
>I don't see it as http acceleration, but it would appear that is how the
>squid community refer to it. Essentially it is host redirection that I
>want but I want it with proxying as well.
>
>

Explain this, it's not clear what you mean.

WHAT SORT OF HTTP TRAFFIC? And how is the traffic forwarded? Is the
firewall doing any form of NAT with it?

There are two main situations:

Forward proxy: a firewall can forward traffic to Squid by "intercepting"
connections that the browser client thinks are going directly to the
target website. At the TCP/IP level this is often called a "transparent
proxy", and it requires appropriate NAT support to be set up on the Squid
host to translate the IPs of the target site.

Reverse proxy (HTTP accelerator): this is where you might have a load
balancer device that lets you host a high-traffic / high-availability
website behind a virtual IP (VIP). The VIP is an IP address that a
router-like device announces on your network as the public-facing address
of the website; any connection to it again undergoes network address
translation (NAT) so that it reaches a specific host in your cluster,
allowing you to balance the traffic across the cluster. The hosts in the
cluster must have their default route set to go back through the load
balancer device (so it can de-NAT the replies).

So... which is it? Are you providing a shared cache for a network of
browser clients to reduce your inbound IP transit volume? Or are you
providing a cache to front your high-volume / high-availability website?

--
Darryl L. Miles
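
As an illustration of the setup John describes (Squid in a DMZ routing
inbound requests to internal servers by Host header), a minimal
configuration sketch in the Squid 2.6+ accelerator syntax might look like
the following. The 2.5 series current at the time used the httpd_accel_*
directives instead, and the hostnames, addresses, and peer names here are
hypothetical:

    # Listen on port 80 in accelerator (reverse proxy) mode and use the
    # request's Host header to choose the origin server (vhost).
    http_port 80 accel vhost

    # One cache_peer per internal application server; "originserver"
    # marks it as a web server rather than another proxy.
    cache_peer 10.1.1.10 parent 80 0 no-query originserver name=app1
    cache_peer 10.1.1.11 parent 80 0 no-query originserver name=app2

    # Map public host names onto the matching internal server.
    acl site_app1 dstdomain app1.example.com
    acl site_app2 dstdomain app2.example.com
    cache_peer_access app1 allow site_app1
    cache_peer_access app2 allow site_app2

    # Only accept requests for the sites actually hosted here.
    http_access allow site_app1
    http_access allow site_app2
    http_access deny all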
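
For comparison, the "transparent proxy" interception Darryl mentions in
the forward proxy case is typically done with a NAT redirect rule rather
than with host headers. A rough sketch on Linux with iptables, assuming
Squid runs on the gateway box itself, listens on its default port 3128,
and uses the Squid 2.6+ interception option (later releases spell it
"intercept"); the interface name is illustrative:

    # On the gateway: redirect outbound web traffic from the client LAN
    # to the local Squid port instead of the real destination website.
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
        -j REDIRECT --to-port 3128

    # In squid.conf: mark the port as receiving intercepted traffic.
    http_port 3128 transparent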