
Re: Too many open files

On 25/07/2013 8:11 a.m., Peter Retief wrote:
Hi

I am struggling with the following error:

    comm_open: socket failure: (24) Too many open files

This happens after squid has been running for many hours.  I have a Xeon
server with 12 cores, 64 GB RAM and 8 x 1 TB disks.  The first two are in a
RAID-1, and the remaining six are used as aufs cache disks.

The system is running 64-bit Ubuntu 12.04 and squid 3.3.6 compiled from
source.

I am running a transparent proxy from two Cisco 7600 routers using WCCPv2.
The purpose is to proxy international bandwidth (3 x 155 Mbps links).
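(For reference, the WCCPv2 side of such a setup in squid.conf looks roughly
like the sketch below; the router addresses are placeholders and GRE
forwarding/return is assumed, so adjust to match the 7600 service
definitions.)

    # squid.conf - WCCPv2 registration (sketch; placeholder router IPs)
    wccp2_router 192.0.2.1
    wccp2_router 192.0.2.2
    wccp2_forwarding_method gre
    wccp2_return_method gre
    # standard service 0 is the well-known "web-cache" (HTTP) service group
    wccp2_service standard 0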

To handle the load I have 6 workers, each allocated its own physical disk
(noatime).
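(A minimal sketch of that worker/disk split, assuming the six cache disks
are mounted at /cache1 .. /cache6 and using a placeholder size; squid
expands ${process_number} separately for each worker:)

    workers 6
    # each worker gets the aufs cache_dir whose path matches its number
    cache_dir aufs /cache${process_number} 512000 16 256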

I have set "ulimit -Sn 16384" and "ulimit -Hn 16384" by configuring
/etc/security/limits.conf as follows:

#       - Increase file descriptor limits for Squid
*               soft    nofile          16384
*               hard    nofile          16384
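(Squid can also be told its FD budget directly in squid.conf; a sketch,
noting that max_filedescriptors cannot raise the limit beyond what the OS
hard limit allows:)

    # squid.conf
    max_filedescriptors 16384

Note also that limits.conf is applied by pam_limits at login, so a squid
started from an init script may not inherit these values; /proc/<pid>/limits
shows what the running process actually received.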

Squid is set to run as user "squid".  If I log in as root and then "su
squid", the ulimits are set correctly.  For root itself, however, the
ulimits keep reverting to 1024.

squidclient mgr:info gives:

          Maximum number of file descriptors:     98304
          Largest file desc currently in use:     18824
          Number of file desc currently in use:    1974

That biggest-FD value is too high for workers that only have 16K FDs available each. I've just fixed the calculation there: it was adding together the biggest-FD values from each worker instead of comparing them with max(). (The 98304 maximum looks like the same kind of aggregate: 6 workers x 16384.)


Note that if one of the workers is reaching its limit of available FDs, you will get that message from that worker while the others run fine with fewer FDs consumed. Can you post the entire, exact cache.log line that contains the error message, please?
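(Something like the following should pull those lines out; the log path is
an assumption and depends on how squid was configured. In SMP mode each
worker prefixes its cache.log lines with "kidN|", which identifies which
worker is hitting the limit:)

    grep 'comm_open: socket failure' /var/log/squid/cache.log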

Amos



