Hello all,

I need to run multiple web servers off of a single public IP (home network behind a NAT firewall). The solution I've devised is to have the firewall forward all attempts to access port 80 on the public IP to an internal machine running Squid in httpd_accelerator mode on port 80. Squid then uses squirm to redirect each request to the appropriate internal system based on the hostname requested. So far this approach has worked, with just one hangup that I've noticed. So I have two questions:

1) Is there a more appropriate solution, or am I on the right track?

2) I had expected the httpd_accel_host setting in squid.conf to make Squid default to that host in the absence of a redirection instruction. However, when I rely on this I get an Access Denied page back from Squid for the URL. This happens when I point my browser at the IP of Squid's interface, or at a DNS name that resolves to Squid's IP but has no redirection rule. Once I add an explicit redirection it works fine.

Here are the relevant snippets from my squid.conf:

httpd_accel_host 127.0.0.1
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_uses_host_header on

I did notice this comment above httpd_accel_single_host:

# Note that the mapping needs to be a
# 1-1 mapping between requested and backend (from redirector) domain
# names or caching will fail, as caching is performed using the
# URL returned from the redirector.

Is this what I'm running up against? Is it possible to avoid this by having Squid just redirect without caching? Or am I just going to need to make sure I add a redirect entry for every name that resolves to Squid's IP (or a catch-all at the bottom of my list of redirects)?

Thanks,
--Brad
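
P.S. For reference, here is roughly how squirm is wired in on my end. The paths, hostnames, and internal addresses below are placeholders rather than my exact config, but they show the shape of the redirect list and the kind of catch-all I'm asking about.

In squid.conf:

  redirect_program /usr/local/squirm/bin/squirm
  redirect_children 5

In squirm.patterns (hostnames and IPs made up for illustration):

  # one rule per public hostname, mapping to the internal box that serves it
  regexi ^http://site1\.example\.com/(.*)$ http://192.168.1.10/\1
  regexi ^http://site2\.example\.com/(.*)$ http://192.168.1.11/\1
  # possible catch-all: send anything unmatched to a default internal box
  regexi ^http://[^/]+/(.*)$ http://192.168.1.10/\1

If a catch-all line like the last one is the expected way to handle the default case, I can live with that; I mainly wanted to confirm whether httpd_accel_host was supposed to cover it on its own.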