Hello,

just started squid 3.0 (stable2) in production. Three web servers, each with squid in front; all three squids are set up as siblings to each other.

First off, I'm really surprised to get TCP_NEGATIVE_HIT/200. Lots of them:

# wc -l squid_access.log
1482641 squid_access.log
# grep -c 'TCP_NEGATIVE_HIT' squid_access.log
118348
# grep -c 'TCP_NEGATIVE_HIT/200' squid_access.log
118003

So roughly 10% of all requests are negative hits. Here are the "storeCheckCachable() Stats":

no.not_entry_cachable    736
no.wrong_content_length  1
no.negative_cached       379766
no.too_big               0
no.too_small             0
no.private_key           0
no.too_many_open_files   0
no.too_many_open_fds     0
yes.default              144468

Interestingly enough, these are mostly images and js/css objects. Then I thought it could be related to:

acl really_static urlpath_regex -i \.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|css|js)$
acl nocache_cookie req_header Cookie NOCACHE\=1
cache allow really_static
cache deny nocache_cookie

And indeed - if I send Cookie: NOCACHE=1, then for objects matching the really_static acl I get one of:

TCP_NEGATIVE_HIT/200
CD_SIBLING_HIT/192.168.10.162
TCP_IMS_HIT/304

So all local hits are logged as TCP_NEGATIVE_HIT if they match "cache allow". I'm not sure whether this is a feature or a bug.

The next question is about the Vary header. I get an absolutely amazing number of these errors in cache.log:

2008/03/14 10:46:54| clientProcessHit: Vary object loop!
2008/03/14 10:46:54| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://some.url' 'accept-encoding'
2008/03/14 10:46:55| clientProcessHit: Vary object loop!
2008/03/14 10:46:55| varyEvaluateMatch: Oops. Not a Vary match on second attempt, 'http://some.other.url' 'accept-encoding="gzip,%20deflate"'

A rough count:

# grep -c 'Vary object loop' cache.log && wc -l squid_access.log
244816
1842602 squid_access.log

Any idea what kind of loop that is and how to avoid it?

Thanks!

Aurimas
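P.S. In case anyone wants the full breakdown rather than per-code greps: a rough sketch, assuming the default native access.log format where the fourth field is "result_code/status", is:

```shell
# Tally Squid result codes (TCP_HIT, TCP_NEGATIVE_HIT, ...) from the log.
# Splits field 4 on "/" to separate the result code from the HTTP status.
awk '{ split($4, a, "/"); count[a[1]]++ }
     END { for (c in count) print c, count[c] }' squid_access.log | sort
```

This prints one line per result code with its count, which makes the ~10% negative-hit share easy to spot at a glance.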