Re: Overflowing filesystems

Sorry if you see this again, I got a bounced mail from squid-cache.org

Chris Robertson wrote:

-----Original Message-----
From: Michael Puckett [mailto:Michael.Puckett@xxxxxxx]
Sent: Wednesday, November 23, 2005 9:25 AM
To: squid-users
Subject:  Overflowing filesystems


I am running this version of squid:

Squid Cache: Version 2.5.STABLE10
configure options: --enable-large-cache-files --disable-internal-dns --prefix=/opt/squid --enable-async-io --with-pthreads --with-aio --enable-icmp --enable-snmp

I imagine you have some reason for disabling the internal DNS resolution.  I'm a bit curious as to what it would be...
That is the way our admin set it up. This particular application is a caching system internal to the company, in which (relatively) few users move (relatively) few VERY large, multi-GB files from (relatively) few origins to (relatively) few destinations. We are not caching web pages.


Squid was specifically enabled for large files. The filesystem holding my cache_dir is 535GB, and the relevant directives look like this:

cache_dir aufs /export/vol01/cache 400000 64 64
cache_swap_low 97
cache_swap_high 99


Aside from the unusually low number of directories for the amount of data, that all seems fine.
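
(For what it's worth, the directory math here, as I figure it:

64 x 64  = 4096 second-level directories (your config)
16 x 256 = 4096 second-level directories (the compiled-in default)

so the total matches the default split anyway, and since you are storing multi-GB objects the object count stays small; the layout is unlikely to be your problem.)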

Squid has consumed the entire partition:

/dev/dsk/c1t1d0s7      537G   529G   2.2G   100%    /export/vol01

That is 529GB actually used, not the ~400GB set in the cache_dir directive, and Squid is now getting write failures.
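
As I understand the units (the cache_dir size is in megabytes, and the watermarks are percentages of it), the expected ceiling works out to:

400000 MB          ~= 390 GiB  target cache size
cache_swap_low 97  ~= 379 GiB  normal eviction should start here
cache_swap_high 99 ~= 387 GiB  aggressive eviction above this

so 529GB on disk is far beyond anything these settings should permit.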

Have I set something up wrong? Why has the cache_dir size directive been ignored and why isn't old cached content being released?


Is Squid the only thing writing to this cache_dir?  Is there only one instance of Squid running?  Do you see a process like unlinkd running?  Are there any errors in the cache_log?  What OS are you running?  Assuming (judging from your email address) it's Solaris, have you had a gander at the FAQ (http://www.squid-cache.org/Doc/FAQ/FAQ-14.html#ss14.1)?
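
If you want to compare Squid's own accounting against what df reports, the cache manager can show you, assuming squidclient was built alongside Squid and the cache manager answers on the default port:

squidclient mgr:storedir    # per-cache_dir maximum and current size, as Squid sees it
squidclient mgr:info        # general stats, including current swap size

If Squid thinks it is holding ~400GB while df says 529GB, the extra space is going to something Squid isn't counting.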
Good call on the OS :) Yes, we are running a multiprocessor Solaris 10 system. There are no errors in the cache log other than the filesystem write failures as the filesystem fills up. The server is entirely dedicated to Squid as a cache server, and the filesystem is entirely dedicated to the cache.

PS output shows:
0 S squid 20127 20121 0 40 20 ? 153 ? Jul 15 ? 0:00 (unlinkd)

with no runtime thus far. Yes, we have had a gander at the FAQ and have been running Squid internally for a number of years now. This is the first time we have filled so large a filesystem while running the large-file Squid build, however.
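
One thing we have not ruled out yet is the swap.state index, which as I understand it lives inside the cache_dir, grows until it is rotated, and is not counted against the configured cache size. We will measure it against the whole tree (paths per our cache_dir above):

du -sk /export/vol01/cache/swap.state   # index size alone, in KB
du -sk /export/vol01/cache              # everything Squid has on this filesystem
squid -k rotate                         # rotating logs also rewrites swap.state

If swap.state turns out to be a large share of the ~129GB overshoot, that would explain where the space went.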

-mikep




