Hello again, I've managed to replicate the error in a development environment. My dev setup is two squids accelerating a master squid, which in turn accelerates a webserver; the two child squids sit behind a load balancer.

To reproduce the problem, I shut down the master squid, generate HTTP load against the child squids via the load balancer, then after about 5 minutes bring the master squid back up. Here is an example response to a valid request that worked before the test, as generated by wget:

Connecting to myurl.mydomain.com[172.23.161.100]:80... connected.
HTTP request sent, awaiting response...
 1 HTTP/1.0 403 Forbidden
 2 Server: squid/2.5.STABLE12
 3 Mime-Version: 1.0
 4 Date: Sun, 23 Apr 2006 22:24:23 GMT
 5 Content-Type: text/html
 6 Content-Length: 1101
 7 Expires: Sun, 23 Apr 2006 22:24:23 GMT
 8 X-Squid-Error: ERR_ACCESS_DENIED 0
 9 X-Cache: MISS from master.mydomain.net
10 X-Cache: MISS from master.mydomain.net
11 X-Cache: MISS from sibling1.object1.com
12 Connection: close
22:18:40 ERROR 403: Forbidden.
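For clarity, the dev hierarchy is configured roughly like this on each child squid (a sketch only — hostnames, ports, and directive choices here are placeholders, not copied from my real squid.conf):

```
# Child squid, squid 2.5 accelerator mode, sitting behind the load balancer.
# All of the names below are illustrative.
http_port 80
httpd_accel_host myurl.mydomain.com
httpd_accel_port 80
httpd_accel_uses_host_header on
httpd_accel_with_proxy off

# Send everything to the master squid rather than going direct:
cache_peer master.mydomain.net parent 80 0 no-query default
never_direct allow all
```

The master squid is configured the same way, except its parent is the webserver itself.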
Extract from cache.log:

2006/04/23 23:24:23| The request GET http://myurl.mydomain.com:80/myfolder1/ is ALLOWED, because it matched 'all'
2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only mode
2006/04/23 23:24:23| The request GET http://myurl.mydomain.com/myfolder1/ is DENIED, because it matched 'all'
2006/04/23 23:24:23| storeEntryValidLength: 233 bytes too big; '8E293D7F9154EF3C2032A87976FAFCA1'
2006/04/23 23:24:23| clientReadRequest: FD 215: no data to process ((11) Resource temporarily unavailable)
2006/04/23 23:24:23| The reply for GET http://myurl.mydomain.com/myfolder1/ is ALLOWED, because it matched 'all'

Access log extract:

10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1401 TCP_DENIED:NONE
10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1427 TCP_MISS:FIRST_UP_PARENT

I have managed to remove the forwarding loop error by instructing squid not to accept requests via itself, as recommended, but the content error still occurs. My config doesn't contain a negative_ttl entry, so I assume it is at the default of 5 minutes. Any ideas?

TIA,
Mark.

On 18/03/06, Henrik Nordstrom <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
> Sat 2006-03-18 at 19:23 +0000, Mark Stevens wrote:
>
> > I will perform further testing against the redirect rules. However,
> > what I find strange is that the problem only happens after downtime;
> > to resolve it I used an alternative redirect_rules file with the same
> > squid.conf file, and the looping errors go away.
>
> How your redirector processes its rules or not is not a Squid
> issue/concern. Squid relies on the redirector of your choice to do its
> job.
>
> Maybe your redirector is relying on some DNS lookups or something else
> not yet available at the time you start Squid in the system bootup
> procedure? I have seen people bitten by such issues in the past.
>
> Regards
> Henrik
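P.S. If the stale 403s really are coming from the negative cache for the default 5 minutes, I guess the workaround would be something along these lines in squid.conf on the child squids (a sketch only — I haven't applied or tested this yet):

```
# Don't cache negative (error) replies at all, so the 403s generated
# while the master was down are not replayed once it comes back.
# Default in squid 2.5 is 5 minutes.
negative_ttl 0 minutes
```

Obviously this only treats the symptom, so I'd still like to understand why the denial happens in the first place.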