Re: Recurrent crashes and warnings: "Your cache is running out of filedescriptors"

I had this problem in the past and created the following script to start Squid:

#!/bin/sh -e
# Raise the file descriptor limit before launching Squid so that
# the daemon inherits it.

echo "Starting squid..."

# Raise both the hard (-H) and soft (-S) open-file limits for this
# shell and everything it starts, including Squid.
ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo "Done......"

That fixed the problem, and it hasn't happened since.
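
If you want to confirm that the new limit actually took effect, asking the cache manager should show it (a quick sketch; it assumes squidclient is installed and Squid is answering on its usual port):

# Ask the running Squid how many file descriptors it was started with.
squidclient mgr:info | grep -i 'file desc'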

Hope that helps.

On 10/11/2011 9:07 AM, Leonardo wrote:
Hi all,

I'm running a transparent Squid proxy on Debian Linux 5.0.5,
configured as a bridge.  The proxy serves a few thousand users daily.
It uses Squirm for URL rewriting and, for the past six weeks, sarg for
generating reports.  I compiled Squid from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10 GB of disk cache:
cache_dir ufs /var/cache 10000 16 256


Everything worked fine for almost one year, but now suddenly I keep
having problems.


Recently Squid crashed and I had to delete swap.state.


Now I keep seeing this warning message on cache.log and on console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

At the OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the maximum number of file descriptors.
I've tried both setting SQUID_MAXFD=4096 in /etc/default/squid and
max_filedescriptors 4096 in squid.conf, but neither was successful.  Do
I really have to recompile Squid to increase the maximum number of FDs?
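
For reference, these are the exact settings I tried (the first in
/etc/default/squid, the second in squid.conf):

# /etc/default/squid
SQUID_MAXFD=4096

# squid.conf
max_filedescriptors 4096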


Today Squid crashed again, and when I tried to relaunch it, it gave this output:

2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' processes
2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem  size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device
FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
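
Presumably the "No space left on device" error means the partition holding
/var/cache filled up; something like the following would confirm it (these
commands are illustrative, not output I captured):

# How full is the filesystem that holds the cache directory?
df -h /var/cache

# How much of that is the Squid cache itself?
du -sh /var/cache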

I therefore deactivated the cache and reran Squid.  It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
directory
and then started.  Now Squid is running and serving requests, albeit
without caching.  However, I keep seeing the same error:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

What is the reason for this, since I'm not using caching at all?


Thanks a lot if you can shed some light on this.
Best regards,


Leonardo

