
RE: RE: Too Many Open File Descriptors


 



On Wed, 10 Aug 2011 08:59:08 +0300, Justin Lawler wrote:
Hi,

Thanks for this. Is this a known issue? Are there any bugs/articles on
this? We would need something more concrete to go to the customer with
on this issue - more background on it would be very helpful.


http://onlamp.com/pub/a/onlamp/2004/02/12/squid.html

Each ICAP service the client request passes through counts as an FD-consuming external helper.
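
If you want to see how much of the FD budget the ICAP legs are eating, one rough check is to count the established connections from the squid box to the ICAP service port. A sketch, assuming the ICAP server listens on the default port 1344 (adjust the port to your setup; Solaris netstat prints the port after a trailing dot):

# Rough count of sockets currently open to the ICAP server (port 1344 assumed).
netstat -an | grep ESTABLISHED | grep -c "\.1344"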


Are 2048 FDs enough? Are there any connection leaks? Does squid ignore
this 2048 value?

The fact that you are here asking that question is proof that no, it's not (for you).


The OS has FD limits as below - so I would have thought the current
configuration should be ok?
set rlim_fd_max=65536
set rlim_fd_cur=8192

Only if squid is not configured with a lower number, as appears to be the case here.
As proof, the manager report from inside squid:
 "Maximum number of file descriptors:   2048"

Squid could have been built with an absolute 2048 limit hard-coded by the configure options. Or Squid could have been started by an init script which lowered the available FDs from the OS default to 2048.
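
To check which of those applies, two quick looks (a sketch; the binary path and the exact configure option name may differ on your build - older 2.x/3.0 builds typically used --with-maxfd):

# Show the options squid was built with:
/usr/local/squid/sbin/squid -v

# On Solaris, show the FD limit the running squid process actually inherited:
plimit `pgrep -x squid`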

You say it's 3.0, which does not support configurable FD limits in squid.conf, so that alternative is out.
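
(For reference only: later releases - 2.7 and, if I remember right, the 3.2 series onward - let you cap it from squid.conf with a line like the one below, but that is no help on 3.0.)

# Hypothetical squid.conf line for a newer release; not valid on 3.0:
max_filedescriptors 8192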

Amos



Thanks,
Justin


-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx]
Sent: Wednesday, August 10, 2011 11:47 AM
To: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  RE: Too Many Open File Descriptors

 On Tue, 09 Aug 2011 23:07:05 -0400, Wilson Hernandez wrote:
That used to happen to us and we had to write a script to start squid
like this:

#!/bin/sh -e
#

echo "Starting squid..."

# Raise the hard and soft open-file limits for this shell so squid
# inherits them, then launch squid.
ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo "Done......"



 Pretty much the only solution.

 ICAP raises the potential worst-case socket consumption per client
 request from 3 FD to 7. REQMOD also doubles the minimum resource
 consumption from 1 FD to 2.
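
 A back-of-the-envelope check with the numbers from your mgr:info output:

   2048 total - 100 reserved           ~ 1950 usable FDs
   1950 / 7 FDs per request worst case ~ 280 concurrent requests

 So a single busy proxy can hit that ceiling very easily.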

 Amos


On 8/9/2011 10:47 PM, Justin Lawler wrote:
Hi,

We have two instances of squid (3.0.15) running on a Solaris box.
Every so often (maybe once every month) we get a load of the below
errors:

"2011/08/09 19:22:10| comm_open: socket failure: (24) Too many open
files"

Sometimes it goes away on its own, sometimes squid crashes and
restarts.

When it happens, it generally happens on both instances of squid on
the same box.

We have the number of open file descriptors set to 2048 - using
squidclient mgr:info:

root@squid01# squidclient mgr:info | grep file
         Maximum number of file descriptors:   2048
         Largest file desc currently in use:   2041
         Number of file desc currently in use: 1903
         Available number of file descriptors:  138
         Reserved number of file descriptors:   100
         Store Disk files open:                  68

We're using squid as an ICAP client. The two squid instances point to
two different ICAP servers, so it's unlikely to be a problem with the
ICAP server.

Is this a known issue? As it's going on for a long time (over 40
minutes continuously), it doesn't seem like it's just traffic spiking
for a long period. Also, we're not seeing it on the other boxes, which
are load balanced.

Any pointers much appreciated.

Regards,
Justin
This message and the information contained herein is proprietary and
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp





