On Tue, 11 Oct 2011 15:07:16 +0200, Leonardo wrote:
Hi all,
I'm running a transparent Squid proxy on Debian Linux 5.0.5, configured
as a bridge. The proxy serves a few thousand users daily. It uses Squirm
for URL rewriting and, for the last six weeks, sarg for generating
reports. I compiled Squid from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options: '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10 GB of disk cache:
cache_dir ufs /var/cache 10000 16 256
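(In that cache_dir line, ufs is the storage type, 10000 is the cache size in megabytes, and 16 and 256 are the counts of first- and second-level subdirectories under /var/cache.)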
Please try 3.1.15. Several FD-related problems have been fixed since 3.1.7.
Now I keep seeing this warning message in cache.log and on the console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of filedescriptors
At the OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the maximum number of file descriptors.
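For reference, those two checks correspond to commands along these lines (the grep pattern is only illustrative):

  cat /proc/sys/fs/file-max                      # kernel-wide limit (314446 here)
  squidclient mgr:info | grep -i 'file descri'   # Squid's own limit (1024 here)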
I've tried setting both SQUID_MAXFD=4096 in /etc/default/squid and
max_filedescriptors 4096 in squid.conf, but neither was successful. Do
I really have to recompile Squid to increase the maximum number of FDs?
You need to run ulimit to raise the per-process limit before starting
Squid.
Squid runs with the lowest of this set of limits:
 - OS /proc limits
 - ulimit -n
 - ./configure --with-filedescriptors=N (default 1024)
 - squid.conf max_filedescriptors (default 0, 'unlimited')
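A minimal sketch of what that looks like in practice (the 4096 value and the paths are only examples, not taken from this thread):

  # in the shell or init script that launches Squid, raise the per-process limit first
  ulimit -HSn 4096
  squid -f /etc/squid/squid.conf

  # and in squid.conf, allow Squid to actually use them
  max_filedescriptors 4096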
Today Squid crashed again, and when I tried to relaunch it, it gave
this output:
2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' processes
2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device
FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
So what is taking up all that space?
2GB+ objects in the cache screwing with the actual size calculation?
logs?
swap.state too big?
core dumps?
other applications?
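A quick way to check each of those candidates, assuming the paths implied by the configure options above (they are guesses, adjust to your layout):

  df -h /var/cache                      # is the filesystem itself full?
  du -sh /var/cache /var/logs           # cache versus log usage
  ls -lh /var/cache/swap.state*         # size of the swap.state journal
  ls -lh /var/cache/*core* 2>/dev/null  # stray core dumps in the cache dir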
I therefore deactivated the cache and restarted Squid. It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or directory
and then started.
*then* started? This error appears when a client connects. Squid has
to already be started and accepting connections for it to occur.
Now Squid is running and serving requests, albeit without caching.
However, I keep seeing the same warning:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of filedescriptors
What is the reason for this, since I'm not using caching at all?
The cache only uses one FD. Each client connection uses one, each server
connection uses one, and each helper uses at least one. Your Squid seems
to think it has only 1024 to share between all of those connections.
Squid can handle this, but only by slowing down incoming traffic a lot
and possibly dropping some client connections.
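As a rough illustration with the numbers above: the startup log already shows FD 91 in use before any client traffic (helpers, DNS sockets, the unlinkd pipe, logs), which leaves roughly 930 of the 1024 descriptors; at a minimum of two per proxied request (one client-side, one server-side) that is only around 460 concurrent requests before the okToAccept warning starts firing.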
Amos