Your cache_dir is set to 20000, which means you can cache only 20GB worth of items. Assuming you're using the default cache_swap_low value (90), Squid will start removing old/stale items from the cache once you hit 90% of that 20GB (18GB, more or less).

Recommendations for caching more items? Use a larger cache_dir setting (assuming you have the disk space), and raise your cache_swap_low/high values to something higher, like 96/98 (a sketch of these directives is appended at the end of this message). Also, make sure this content is cacheable for long periods of time (assuming static content): just because an object is in the cache doesn't mean it will be served from cache! Check the return headers from the origin for Expires/max-age/Cache-Control (you can use http://www.ircache.net/cgi-bin/cacheability.py for this), and finally take a look at your refresh_pattern rules, which apply to content without cache-controlling headers.

However, I don't think you will ever get all 10TB of content cached by Squid unless: 1) your Squid server has the 10TB of disk necessary to cache everything, 2) your content can be cached essentially forever, and 3) everything has been requested at least once so that Squid's cache gets fully populated. Instead, you probably want to approach caching as a means of saving bandwidth costs on frequently requested content, which means what you're doing right now is fine: your cache is fully populated and Squid is continuing to do its job. Just make sure you're caching optimally: profile the hit rate and aim for around 80% or more, depending on request patterns.

HTH

-----Original Message-----
From: Jamie Plenderleith [mailto:jamie@xxxxxxxxxxxx]
Sent: Monday, January 26, 2009 10:53 AM
To: squid-users@xxxxxxxxxxxxxxx
Subject: refresh_pattern to (nearly) never delete cached files in a http accelerator scenario?

Hi All,

I am using Squid as an HTTP accelerator/reverse proxy. It is being used to cache the contents of a site that is served from a 1Mbps internet connection, while the proxy itself is hosted at Rackspace in the US. Users visit the Squid server, and if an item isn't there it is retrieved from our offices over the 1Mbps upstream. I started running wget on another machine on the web to pull in the contents of the site, and the cache on the proxy was growing and growing - but only to a certain point, and then it seemed to stop at about 170,000 files. Below is the configuration that we've been using:

http_port 80 accel defaultsite=[our office's static IP]
cache_peer [our office's static IP] parent 80 0 no-query originserver name=myAccel
cache_dir ufs c:/squid/var/cache 20000 16 256
acl our_sites dstdomain [our office's static IP]
acl all src 0.0.0.0/0.0.0.0
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
visible_hostname [hostname of proxy server]
cache_mem 1 GB
maximum_object_size 20000 KB
maximum_object_size_in_memory 1000 KB

We tried some variations of the refresh_pattern configuration option, but our cache doesn't seem to grow beyond its current size. There is about 10TB worth of data to cache, and the cache isn't going past 17.3GB in size. I was logging the growth of the cache folder, and you can see that around 21/01/09 - 22/01/09, while it had been getting bigger, it then started getting smaller.
15:21 19/01/09  1.65/1.99 (126,276 files)
23:22 19/01/09  2.99/3.35 (134,820 files)
01:23 20/01/09  3.73/4.10 (139,767 files)
02:33 20/01/09  4.17/4.54 (142,415 files)
11:17 20/01/09  7.42/7.82 (162,009 files)
12:37 20/01/09  7.92/8.33 (164,794 files)
13:08 20/01/09  8.10/8.52 (165,993 files)
19:42 20/01/09  9.39/9.82 (175,192 files)
23:17 20/01/09  10.0/10.5 (179,588 files)
01:38 21/01/09  10.5/10.9 (182,303 files)
02:24 21/01/09  10.6/11.1 (183,209 files)
12:14 21/01/09  12.5/13.0 (193,659 files)
17:54 21/01/09  13.8/14.2 (200,816 files)
03:14 22/01/09  15.6/16.1 (212,081 files)
16:54 22/01/09  17.2/17.5 (155,725 files)
22:48 22/01/09  17.3/17.6 (107,216 files)
17:07 23/01/09  17.4/17.6 (107,246 files)
14:49 25/01/09  17.3/17.6 (107,287 files)
18:48 26/01/09  17.3/17.5 (103,780 files)

Any recommendations on how to ensure the proxy doesn't remove anything from cache?

Regards,
Jamie
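
For reference, here is a minimal squid.conf sketch of the directives discussed in the reply above (larger cache_dir, higher cache_swap_low/high, and a catch-all refresh_pattern for objects the origin doesn't send cache-control headers for). The 500000 MB cache_dir size and the one-week/one-year refresh ages are placeholder values chosen purely for illustration, not tuned recommendations - size them to the disk and content you actually have:

    # Placeholder size: 500000 MB (~500GB), up from 20000 MB.
    # Only raise this if the volume really has that much free space.
    cache_dir ufs c:/squid/var/cache 500000 16 256

    # Let the cache fill closer to capacity before eviction starts
    # (defaults are 90/95).
    cache_swap_low 96
    cache_swap_high 98

    # For objects without Expires/Cache-Control from the origin, treat them
    # as fresh for at least a week (10080 min) and at most a year (525600 min).
    refresh_pattern . 10080 90% 525600

Note that refresh_pattern only governs freshness for objects lacking explicit cache-control headers; anything the origin marks as uncacheable or short-lived will still be refetched regardless of these values.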