RE: Overflowing filesystems

> -----Original Message-----
> From: Michael Puckett [mailto:Michael.Puckett@xxxxxxx]
> Sent: Wednesday, November 23, 2005 6:26 PM
> To: squid-users
> Subject: Re:  Overflowing filesystems
> 
> 
> Sorry if you see this again, I got a bounced mail from
> squid-cache.org
> 

The mailing list doesn't allow HTML mail.

> Chris Robertson wrote:
> 
>>>>> -----Original Message-----
>>>>> From: Michael Puckett [mailto:Michael.Puckett@xxxxxxx]
>>>>> Sent: Wednesday, November 23, 2005 9:25 AM
>>>>> To: squid-users
>>>>> Subject: Overflowing filesystems
>>>>> 
>>>>> I am running this version of squid:
>>>>> 
>>>>> Squid Cache: Version 2.5.STABLE10 configure options:
>>>>> --enable-large-cache-files --disable-internal-dns
>>>>> --prefix=/opt/squid --enable-async-io --with-pthreads
>>>>> --with-aio --enable-icmp --enable-snmp
>>>>> 
>>> 
>>> I imagine you have some reason for disabling the internal 
>>> DNS resolution.  I'm a bit curious as to what it would be...
>>> 
>> 
> That is the way our admin set it up. This particular application is
> an internal-only (to the company) caching system through which (relatively)
> few users move (relatively) few VERY large, multi-GB files from
> (relatively) few origins to (relatively) few destinations. We are not
> caching web pages.
> 

Fair enough.

>> 
>>>>> specifically enabled for large files. My cache_dir is 535GB and the
>>>>> cache_dir directive looks like this:
>>>>> 
>>>>> cache_dir aufs /export/vol01/cache 400000 64 64
>>>>> cache_swap_low 97
>>>>> cache_swap_high 99
>>>>> 
>>> 
>>> Aside from the unusually low number of directories for the 
>>> amount of data, that all seems fine.
>>> 

Obviously, if all it's caching is really big files, you don't need many directories.
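
(Purely as back-of-the-envelope arithmetic, and assuming those multi-GB objects average somewhere around 2GB each, which is my guess rather than a number you gave:

    400000 MB cache  /  ~2000 MB per object  =  ~200 objects
    64 L1 dirs  x  64 L2 dirs                =  4096 second-level directories

so even a handful of objects per directory would be generous, and the 64/64 layout is nowhere near a constraint.)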

>>>>> Squid has consumed the entire partition:
>>>>> 
>>>>> /dev/dsk/c1t1d0s7      537G   529G   2.2G   100%   /export/vol01
>>>>> 
>>>>> Not the 400GB expected from the cache_dir directive, and it is
>>>>> now giving write failures.
>>>>> 
>>>>> Have I set something up wrong? Why has the cache_dir size 
>>>>> directive been ignored and why isn't old cached content being
>>>>> released?
>>>>> 
>>> 
>>> Is Squid the only thing writing to this cache_dir?  Is
>>> there only one instance of Squid running?  Do you see a process like
>>> unlinkd running?  Are there any errors in the cache_log?  What OS are
>>> you running?  Assuming (judging from your email address) it's
>>> Solaris, have you had a gander at the FAQ
>>> (http://www.squid-cache.org/Doc/FAQ/FAQ-14.html#ss14.1)?
>>> 
>> 
> Good call on the OS  :)  Yes, we are running a multiprocessor Solaris
> 10 system. There are no errors in the cache log other than the
> filesystem write failures as the filesystem fills up. The server is
> entirely dedicated to Squid as a cache server, and the filesystem is
> entirely dedicated to the cache.
> 
> PS output shows:
>  0 S    squid 20127 20121   0  40 20        ?    153 ?        Jul 15 ?           0:00 (unlinkd)
> 
> with no runtime thus far. Yes, we have had a gander at the FAQ and
> have been running Squid internally for a number of years now. This is
> the first time we have filled up so large a filesystem while running
> the large-file Squid build, however.
> 
> -mikep
> 

Huh...  Well, I have no experience with either acceleration setups or the enable-large-cache-files compilation option, but I would advise enabling the cache_store_log and seeing if it gives any indication of what is going on here (clear the cache manually to get a fresh start).  Just how big are the files you are caching?  Would it be possible for the cache to be at 396GB (99% of 400) and a new object to be requested that fills the partition (a staggering 140GB file)?  I'm not sure how Squid would handle that.  Then again, I find the possibility of a single 140GB file to be...  unlikely.
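
For what it's worth, a minimal squid.conf sketch along those lines might look like this. The log path is only a guess based on the --prefix shown above, and the lower watermarks are just an example of giving eviction more headroom, not values taken from your config:

    # record object swap-outs and releases so eviction (or the lack of it) is visible
    cache_store_log /opt/squid/var/logs/store.log

    # illustrative, more conservative watermarks than the 97/99 currently in use
    cache_swap_low 90
    cache_swap_high 95

Watching store.log for RELEASE entries as the cache grows past the configured 400000 MB should show fairly quickly whether old objects are actually being unlinked.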

Chris

