> -----Original Message-----
> From: Michael Puckett [mailto:Michael.Puckett@xxxxxxx]
> Sent: Wednesday, November 23, 2005 9:25 AM
> To: squid-users
> Subject: Overflowing filesystems
>
> I am running this version of squid:
>
> Squid Cache: Version 2.5.STABLE10
> configure options: --enable-large-cache-files --disable-internal-dns
> --prefix=/opt/squid --enable-async-io --with-pthreads --with-aio
> --enable-icmp --enable-snmp

I imagine you have some reason for disabling the internal DNS resolution. I'm a bit curious as to what it would be...

> specifically enabled for large files. My cache_dir is 535GB and the
> cache_dir directive looks like this:
>
> cache_dir aufs /export/vol01/cache 400000 64 64
> cache_swap_low 97
> cache_swap_high 99

Aside from the unusually low number of directories for the amount of data, that all seems fine.

> Squid has consumed the entire partition:
>
> /dev/dsk/c1t1d0s7 537G 529G 2.2G 100% /export/vol01
>
> Not the 400GB expected in the cache_dir directive and is now giving
> write failures.
>
> Have I set something up wrong? Why has the cache_dir size
> directive been
> ignored and why isn't old cached content being released?

Is Squid the only thing writing to this cache_dir? Is there only one instance of Squid running? Do you see a process like unlinkd running? Are there any errors in the cache_log? What OS are you running? Assuming (judging from your email address) it's Solaris, have you had a gander at the FAQ (http://www.squid-cache.org/Doc/FAQ/FAQ-14.html#ss14.1)?

> -mikep

Chris
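The remark about the unusually low directory count can be checked with quick arithmetic. The 400000 MB cache size and the 64x64 L1/L2 layout come from the `cache_dir` line in the thread; the 13 KB average object size is an assumption for illustration, not a figure from the thread:

```python
# Rough sizing sketch for: cache_dir aufs /export/vol01/cache 400000 64 64
cache_mb = 400_000          # cache_dir size from the thread, in MB
avg_object_kb = 13          # assumed average cached-object size
l1, l2 = 64, 64             # first- and second-level directory counts

objects = cache_mb * 1024 // avg_object_kb   # estimated object count
dirs = l1 * l2                               # total second-level directories
files_per_dir = objects // dirs              # average files per L2 directory

print(objects)        # -> 31507692 (about 31.5 million objects)
print(files_per_dir)  # -> 7692 files in each second-level directory
```

Several thousand files per directory is far more than the few hundred a layout like this is usually aimed at, which is presumably what prompted the comment; raising the L1/L2 values spreads the same objects over more directories.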