On 11/04/2014 3:56 a.m., Strafe wrote:
> Can someone please advise me about a problem that I have.
>
> I've deployed Squid3 Reverse Proxy Server. I have one server in the
> internal network that is forwarded through Squid.
>
> My external network is called earth.com and the internal server is
> called moon.
> I've managed to set Squid to forward packets that come for
> moon.earth.com to the internal server moon which is on IP 192.168.1.10
>
> My question is: How can I setup Squid to forward packets when it
> receives http://www.earth.com/moon instead of the current setup -
> http://moon.earth.com

Firstly, Squid does not forward packets. Squid forwards HTTP messages.

What you ask can be done in several ways ...

1) Stick with the current setup, mapping only the domain names. It is
the Right Way to map HTTP requests between domains, and it avoids many
complications and problems.

For your existing config:
 * drop the cache_peer_domain directive
 * make the internal server aware that it is servicing the public
   domain www.earth.com, and the entire HTTP ecosystem will operate as
   designed by the RFC 2616 specifications.

 ** Virtual hosting is your friend **

2) If you can, set up a URL redirection which forwards clients to the
new URL. This is the Right Way to make a URL point at a different URL
in HTTP.

 a) Squid-3.2 and newer can set up a redirection like so:

   acl foo dstdomain moon.earth.com
   deny_info 302:http://moon.earth.com/moon%R foo
   http_access deny foo

 b) With Squid-3.1 and older you need to set up a URL redirector
    program to emit the 30x status code and the new URL:

   acl foo dstdomain moon.earth.com
   url_rewrite_program /somescript
   url_rewrite_access allow foo

Doing (2) in either form requires that both www.earth.com and
moon.earth.com are visible and available to the public clients.

3) Only if you really have no choice at all. And I mean that very
seriously: this is an absolute last resort.

You can set up a URL-rewriter program like (2a) above, but omitting
the 30x status code. This changes the URL delivered to the peer server
but leaves the client thinking it's accessing moon.earth.com URLs.

When undertaking (3) you had best audit your site code, contents, and
user abilities to ensure that:

 * the server MUST NOT generate any absolute-URLs using the domain
   which the web server is asked for.
 * page content MUST NOT use any relative-URL beginning with a '/'
   character, unless it is prefixed with /moon by the origin server.
 * Content-Disposition headers meet the criteria above
 * Content-Location headers meet the criteria above
 * Location headers meet the criteria above
 ... anything containing URLs must meet the absolute-URL and
 relative-URL criteria.

These conditions apply to URLs *anywhere* in the HTTP protocol and in
the content objects emitted by the web server application. Generated
and static content alike is affected, as are compressed archives with
URL "file" meta tags and DB entries of user-input URLs. *Anything*
with URLs.

If any one of these conditions on URL-rewriting is missed during your
audit, or future site changes cause one to become untrue, then the
clients receiving traffic with those broken URL details will encounter
issues of various types when trying to access or use them.

HTH
Amos
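
P.S. In case a concrete example of option (1) helps, below is a rough
sketch of the accelerator-style squid.conf I have in mind. The port
numbers and the "moon_peer" / "earth_sites" labels are my own guesses
rather than anything from your config, so adjust them to match your
setup.

 # Accept public traffic for the earth.com sites in accelerator mode
 http_port 80 accel defaultsite=www.earth.com vhost

 # The internal origin server "moon" on 192.168.1.10
 cache_peer 192.168.1.10 parent 80 0 no-query originserver name=moon_peer

 # Relay only requests for the public domain to that peer
 acl earth_sites dstdomain www.earth.com
 cache_peer_access moon_peer allow earth_sites
 cache_peer_access moon_peer deny all
 http_access allow earth_sites

With the origin server configured to answer for www.earth.com
(virtual hosting), no cache_peer_domain and no URL rewriting is
needed at all.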