RE: Increasing File Descriptors - Fixed!!

I may try that, thanks

steve

-----Original Message-----
From: Jose Ildefonso Camargo Tolosa [mailto:ildefonso.camargo@xxxxxxxxx] 
Sent: Monday, May 10, 2010 5:39 PM
To: Bradley, Stephen W. Mr.
Cc: mnhassan@xxxxxxx; Squid Users
Subject: Re:  Increasing File Descriptors - Fixed!!

Hi!

On Mon, May 10, 2010 at 8:44 AM, Bradley, Stephen W. Mr.
<bradlesw@xxxxxxxxxx> wrote:
> Never thought of that, thanks.
>
> We use one IP address for each of the Squid servers, and I gather from what you are saying that this is also going to be a problem?

It depends on the number of simultaneous outgoing connections on the
proxy; you need to evaluate that.  I hit the limit once, on a network
of about 1000 computers (used mainly by college students): I had to
increase the port range to allow 40k connections, and it settled at
around 20k with peaks of ~35k :-S (the peaks lasted a couple of
minutes and occurred at least three times a day).

It was interesting.
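
If you ever need to widen the range yourself, a minimal sketch of the
knobs involved (the values are just an example, roughly matching the
~40k figure above):

  # check the current range
  cat /proc/sys/net/ipv4/ip_local_port_range
  # widen it at runtime
  sysctl -w net.ipv4.ip_local_port_range="20000 61000"
  # make it persistent by adding this line to /etc/sysctl.conf
  net.ipv4.ip_local_port_range = 20000 61000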

>
> steve
>
> -----Original Message-----
> From: Jose Ildefonso Camargo Tolosa [mailto:ildefonso.camargo@xxxxxxxxx]
> Sent: Saturday, May 08, 2010 3:17 PM
> To: Bradley, Stephen W. Mr.
> Cc: mnhassan@xxxxxxx; Squid Users
> Subject: Re:  Increasing File Descriptors - Fixed!!
>
> Hi!
>
> Just one thought here:
>
> I believe there is a limit on the number of connections that can
> originate from a single IP (IPv4), so I guess that you have *several*
> external IPs and that you make Squid use many of them.  Look at this
> file on your system:
>
> cat /proc/sys/net/ipv4/ip_local_port_range
>
> On my PC it is 32768 to 61000, giving me a max of 28232 outgoing
> connections per IP.  That's usually "enough", but your case isn't a
> "usual" one.
>
> I hope this helps,
>
> Ildefonso Camargo
>
> On Fri, May 7, 2010 at 8:32 AM, Bradley, Stephen W. Mr.
> <bradlesw@xxxxxxxxxx> wrote:
>> Got it resolved!
>>
>> cat /proc/sys/fs/file-max showed that I could go as high as 3,138,830 FDs.
>>
>> I changed the compile options to --with-maxfd=128000 and recompiled and installed it.
>>
>> I changed the line in my /etc/init.d/squid script to ulimit -HSn 128000 and restarted.
>>
>> I thought I had tried all this before but evidently not.
>>
>> Since it almost held the load at 32,768, at 128,000 I should have enough headroom to keep us safe, for now.
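>>
>> For anyone following along, the fix in sketch form (the 128000 value
>> and the init script path are from my setup, adjust as needed):
>>
>>   # rebuild with a larger compiled-in maximum
>>   ./configure --with-maxfd=128000 ...other options...
>>   make && make install
>>
>>   # in /etc/init.d/squid, before squid is started
>>   ulimit -HSn 128000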
>>
>>
>> Thanks to all who responded.
>>
>> steve
>>
>>
>> -----Original Message-----
>> From: Nyamul Hassan [mailto:mnhassan@xxxxxxx]
>> Sent: Thursday, May 06, 2010 4:15 PM
>> To: Squid Users
>> Subject: Re:  Increasing File Descriptors
>>
>> He needs more FDs because this single box is handling 5000 users over
>> a 400mbps connection.  We run around 2,000 users on generic hardware,
>> and have seen FDs as high as 20k.
>>
>> We use CentOS 5 and the following guide is a good place to increase
>> the FD limit:
>> http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
>> The command "cat /proc/sys/fs/file-max" shows the maximum number of
>> FDs your OS can handle.
>>
>> After you've made sure that your OS allows your desired FD limit,
>> please re-run Squid.  Squid shows how many FDs it is configured for in
>> its "General Runtime Information" page (mgr:info on the command line)
>> from the CacheMgr interface.  If this still shows lower than the OS
>> limit you just saw, then you might need to recompile Squid with the
>> '--with-maxfd=<your-desired-fdmax>' flag set during "./configure".
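>>
>> For example, something along these lines shows both numbers (assuming
>> cachemgr access is allowed from localhost; the grep pattern is only
>> illustrative):
>>
>>   cat /proc/sys/fs/file-max
>>   ulimit -Hn
>>   squidclient mgr:info | grep -i 'file descriptors'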
>>
>> As a side note, if you are using Squid as a forward proxy, you might
>> have better results with Squid 2.7x.
>>
>> Regards
>> HASSAN
>>
>>
>> On Fri, May 7, 2010 at 00:53, George Herbert <george.herbert@xxxxxxxxx> wrote:
>>>
>>> Do this:
>>>
>>> ulimit -Hn
>>>
>>> If the value is 32768, that's your current kernel/sys max value and
>>> you're stuck.
>>>
>>> If it's more than 32768 (and my RHEL 5.3 box says 65536) then you
>>> should be able to increase up to that value.  Unless there's an
>>> internal signed 16-bit int involved in FD tracking inside the Squid
>>> code, something curious is happening...
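>>>
>>> If the hard limit itself is what's capping you, it can usually be
>>> raised persistently; a minimal sketch (user name and numbers are only
>>> an example, and init scripts that don't go through PAM may still need
>>> the ulimit call in the script itself):
>>>
>>>   # /etc/security/limits.conf
>>>   squid  soft  nofile  65536
>>>   squid  hard  nofile  65536
>>>
>>>   # /etc/sysctl.conf, then apply with: sysctl -p
>>>   fs.file-max = 131072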
>>>
>>> However - I'm curious as to why you'd need that many.  I've had
>>> top-end systems with Squid clusters compiled for 16k file
>>> descriptors that only ever really used 4-5k.  What are you doing that
>>> you need more than 32k?
>>>
>>>
>>> -george
>>>
>>> On Thu, May 6, 2010 at 10:32 AM, Bradley, Stephen W. Mr.
>>> <bradlesw@xxxxxxxxxx> wrote:
>>> > Unfortunately it won't work for me above 32768.
>>> >
>>> > I have the ulimit in the startup script and that works okay, but I need more than 32768.
>>> >
>>> > :-(
>>> >
>>> >
>>> >
>>> > -----Original Message-----
>>> > From: Ivan . [mailto:ivanhec@xxxxxxxxx]
>>> > Sent: Thursday, May 06, 2010 5:17 AM
>>> > To: Bradley, Stephen W. Mr.
>>> > Cc: squid-users@xxxxxxxxxxxxxxx
>>> > Subject: Re:  Increasing File Descriptors
>>> >
>>> > worked for me
>>> >
>>> > http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
>>> >
>>> > no recompile necessary
>>> >
>>> >
>>> > On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
>>> > <bradlesw@xxxxxxxxxx> wrote:
>>> >> I can't seem to increase the number above 32768 no matter what I do.
>>> >>
>>> >> I've tried ulimit at compile time, sysctl.conf, and everything else, but no luck.
>>> >>
>>> >>
>>> >> I have about 5,000 users on a 400mbit connection.
>>> >>
>>> >> Steve
>>> >>
>>> >> RHEL5 64bit with Squid 3.1.1
>>> >
>>>
>>>
>>>
>>> --
>>> -george william herbert
>>> george.herbert@xxxxxxxxx
>>>
>>
>

