> > still using my 4 siblings in proxy-only.
> > Works fine for a while... until the digests are exchanged.
> > As expected, my logs are full of forwarding loops detected.
> >
> > The problem is, since the siblings are in 'proxy-only', they do not
> > cache the looped objects and constantly ask their apache for them.
> > The next digest exchanges will fix the current loops, but will create
> > new loops.
> > To solve this, I tried to prevent a squid from querying a sibling on
> > behalf of another sibling:
> >
> > example of squid1.conf:
> > cache_peer 192.168.17.12 sibling 8000 3130 proxy-only name=squid2
> > cache_peer 192.168.17.13 sibling 8000 3130 proxy-only name=squid3
> > cache_peer 192.168.17.14 sibling 8000 3130 proxy-only name=squid4
> >
> > acl from_squids src 192.168.17.12
> > acl from_squids src 192.168.17.13
> > acl from_squids src 192.168.17.14
> >
> > cache_peer_access squid2 deny from_squids
> > cache_peer_access squid3 deny from_squids
> > cache_peer_access squid4 deny from_squids
> >
> > But it is not helping...
> > Any idea?
>
> Are you not configuring each squid to prefer its parent for that parent's
> traffic?
>
> for example, squid1 has:
>
> cache_peer apache1 parent ...
>
> acl for_my_apache dstdomain .example.com
>
> cache_peer_access apache1 allow for_my_apache
> cache_peer_access squid2 deny for_my_apache
> cache_peer_access squid3 deny for_my_apache
> cache_peer_access squid4 deny for_my_apache

Right now, we have LVS load-balancing x apaches for the same vhost
www.example.com.
My test squids are on one server (different IPs).
My test apaches are on another server (same IP but different ports...):

+---> Squid1 (192.168.17.11:8000) ---> Apache1 (192.168.16.23:8081)
+---> Squid2 (192.168.17.12:8000) ---> Apache2 (192.168.16.23:8082)
+---> Squid3 (192.168.17.13:8000) ---> Apache3 (192.168.16.23:8083)
+---> Squid4 (192.168.17.14:8000) ---> Apache4 (192.168.16.23:8084)

Should I put the apaches on different IPs?

I have in squid1.conf:

http_port 192.168.17.11:8000 accel defaultsite=www.example.com act-as-origin vhost http11

cache_peer 192.168.16.23 parent 8081 0 no-query originserver no-digest no-netdb-exchange max-conn=256 front-end-https=auto http11 name=apache
cache_peer 192.168.17.12 sibling 8000 3130 proxy-only name=squid2
cache_peer 192.168.17.13 sibling 8000 3130 proxy-only name=squid3
cache_peer 192.168.17.14 sibling 8000 3130 proxy-only name=squid4

icp_access allow from_localnetC
icp_access deny all

cache_peer_access apache allow from_localnetC
cache_peer_access apache deny all

miss_access allow from_localnetC
miss_access deny all

hierarchy_stoplist cgi-bin ?

I must admit that my tests are pretty "intensive": I run a loop of random
requests against random squids, with occasional random purges, so the digests
are quickly outdated. In reality it would take longer.

If the digests are outdated, is there a way to fall back to regular ICP
communication?
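Also, would a miss_access deny on the sibling IPs be the right way to keep one
squid from fetching a miss on behalf of another? Just a sketch for squid1,
reusing the from_squids acl from my previous mail and keeping the existing
rules after it (each squid would list its three siblings):

acl from_squids src 192.168.17.12 192.168.17.13 192.168.17.14

miss_access deny from_squids
miss_access allow from_localnetC
miss_access deny all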
Here's what I get with a digest miss: I ask squid2 for an object (how come
FIRST_UP_PARENT appears first?).

squid2 access.log:
1215783827.918 1 192.168.17.12 TCP_MISS/200 7188 GET http://192.168.16.23/img/spain.gif - FIRST_UP_PARENT/apache image/gif
1215783827.919 2 192.168.17.12 TCP_MISS/200 7233 GET http://192.168.16.23/img/spain.gif - CD_SIBLING_HIT/squid3 image/gif

squid3 access.log:
1215783827.918 8 192.168.17.13 TCP_MISS/200 7213 GET http://192.168.16.23/img/spain.gif - CD_SIBLING_HIT/squid2 image/gif

squid2 cache.log:
2008/07/11 15:43:47| WARNING: Forwarding loop detected for:
Client: 192.168.17.12 http_port: 192.168.17.12:8000
GET http://192.168.16.23/img/spain.gif HTTP/1.0
Pragma: no-cache
User-Agent: Wget/1.10.2 (Red Hat modified)
Accept: */*
Host: 192.168.16.23
Via: 1.0 Squid2:8000 (squid), 1.0 Squid3:8000 (squid)
X-Forwarded-For: 192.168.17.12, 192.168.17.13
Cache-Control: max-age=864000
Proxy-Connection: keep-alive

apache2 access.log:
192.168.28.220 - - [11/Jul/2008:15:43:47 +0200] "GET /img/spain.gif HTTP/1.1" 200 6863

Maybe we will forget about digests, drop proxy-only to limit the ICP traffic,
and have regular sibling communication with duplicated caches...

Thx,
JD
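PS: if we do go that route, I suppose the sibling lines would just become
something like this (a sketch: digests disabled so only ICP is used, and no
proxy-only so objects fetched from a sibling also get cached locally):

cache_peer 192.168.17.12 sibling 8000 3130 no-digest name=squid2
cache_peer 192.168.17.13 sibling 8000 3130 no-digest name=squid3
cache_peer 192.168.17.14 sibling 8000 3130 no-digest name=squid4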