On 20/04/2012 7:01 p.m., x-man wrote:
> Hello there, I am planning a Squid implementation which consists of one main Squid that will serve all the web except the video sites, and a second Squid box that will only deal with the video content. As I understand it, I have to use the cache_peer directive to tell the main Squid that it has to ask the video Squid about the content (it will be based on ACLs).
No. cache_peer tells Squid how to set up TCP connections to a peer. That is all.
cache_peer_access is what tells Squid *which* requests to pass there. The problem you are describing can be the result of not having those ACLs present. The child Squid only re-tries alternative paths if the parent proxy fails to supply a response for the client (ie link outages get routed around).
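Roughly like this on the main Squid (a sketch only; the peer address, the name= label, and the domain list are placeholders for your setup):

   # send video-site requests to the second Squid; everything else goes direct
   acl videosites dstdomain .youtube.com .vimeo.com
   cache_peer 192.168.1.2 parent 3128 0 no-query name=videocache
   cache_peer_access videocache allow videosites
   cache_peer_access videocache deny all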
> The problem that I see is that the second Squid, which uses a url_rewriter and a local Apache script to cache and deliver the video content, will always reply to the main Squid with a cache miss, because to Squid this is not cached content (it is maintained by the url_rewriter and the Apache PHP script), and then the main Squid will deliver the content from the internet.
URL re-writer does not "maintain" any part of HTTP. Its sole purpose is to alter the URL for a request before that request gets serviced.
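To illustrate, a re-writer helper is nothing more than a loop that maps one URL to another (a minimal Python sketch assuming the old one-URL-per-line helper protocol; the matched domain and the Apache script URL are placeholders):

   #!/usr/bin/env python
   # Minimal url_rewrite_program sketch: read one request line per URL,
   # write back either a replacement URL or a blank line (= no change),
   # and flush so Squid is not left waiting on buffered output.
   import sys

   for line in sys.stdin:
       parts = line.split()
       if not parts:
           continue
       url = parts[0]                      # first token is the request URL
       if "videosite.example" in url:      # placeholder match rule
           # hand the request off to the local Apache script (placeholder URL)
           sys.stdout.write("http://127.0.0.1/videocache.php?u=" + url + "\n")
       else:
           sys.stdout.write("\n")          # leave the URL untouched
       sys.stdout.flush()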
What does Apache have to do with a two-Squid peering setup?
> Can someone suggest a workaround for this?
Only certain specific types of HTTP "route" failure status cause the main Squid to retry like you describe. You need the URL-rewriter NOT to cause 4xx/5xx errors.
You can disable the retries by using never_direct with exactly the same ACL rules used in cache_peer_access to select the parent cache. What that will do is cause the 4xx/5xx errors produced by your re-writer to be passed on to the client instead of the real video being found and fetched.
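Continuing the placeholder sketch above, that is just:

   # force video-site requests through the parent, never direct
   never_direct allow videosites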
Amos