Hi Carlos,
Please note that clients' requests also consume file descriptors. Use
netstat to find the exact number of sockets currently open.
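For example, something like this gives a rough count (the grep pattern
is an assumption; adjust it to match your process name):

  # count sockets attributed to squid processes
  netstat -anp 2>/dev/null | grep -c squid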
If you use Ubuntu, you might be interested in this thread too:
http://www.squid-cache.org/mail-archive/squid-users/201212/0276.html
Best wishes,
Pavel
On 08/20/2013 09:57 PM, Carlos Defoe wrote:
Hello,
Look at this:
2013/08/20 07:55:26 kid1| ctx: exit level 0
2013/08/20 07:55:26 kid1| Attempt to open socket for EUI retrieval
failed: (24) Too many open files
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files
2013/08/20 07:55:26 kid1| Reserved FD adjusted from 100 to 64542 due to failures
2013/08/20 07:55:26 kid1| WARNING! Your cache is running out of filedescriptors
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files
ulimit -n = 65535 (I have configured it in limits.conf myself)
When squid starts, it shows no errors:
2013/08/20 13:38:11 kid1| Starting Squid Cache version 3.3.8 for
x86_64-unknown-linux-gnu...
2013/08/20 13:38:11 kid1| Process ID 8087
2013/08/20 13:38:11 kid1| Process Roles: worker
2013/08/20 13:38:11 kid1| With 65535 file descriptors available
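Since limits.conf is applied by PAM at login and might not reach a
daemon started at boot, I can also verify what the kernel actually
granted the running worker, e.g. (8087 is the PID from the startup log
above):

  # show the effective open-files limit of the running worker
  grep 'open files' /proc/8087/limits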
Running lsof shows no more than 8000 open files when the problem occurs.
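As a cross-check against lsof, the worker's descriptors can also be
counted directly in /proc, e.g.:

  # raw descriptor count for the worker (PID 8087 from the log above)
  ls /proc/8087/fd | wc -l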
Why would it say "Too many open files"? Do you think SELinux could be
the cause of this issue?
Thanks