
RE: Too many open files


> Amos:
> Squid starts with 16K FD, of which 100 are reserved. When it adjusts
> that, the 16K limit is still the total, but the reserved count is
> raised so that N sockets become reserved/unavailable.
> So 16384 - 15394 = 990 FD safe to use after the adjustments caused by
> the error.

>> Peter:
>> I would have deduced that there was some internal limit of 100 (not
>> 1000) FDs, and that squid was re-adjusting to the maximum currently
>> allowed (16K)?

> Amos:
> Yes, that is correct. However, it is the "reserved" limit being raised.
> Reserved is the number of FD which are configured as available but
> determined to be unusable. It can be thought of as the cordon around a
> danger zone for FD - if Squid strays into using that number of sockets
> again it can expect errors. Raising that count reduces Squid's
> operational FD resources by the amount raised.
> Squid may still try to use some of them under peak load conditions, but
> will do so only if there is no other way to free up the safe in-use FD.
> Because of that emergency usage, when Squid sets the reserved limit it
> does not set it exactly on the FD number which got error'd. It sets it
> 2-3% into the "safe" FD count. So rounding 990 up by that slight amount
> we get 1024, which is a highly suspicious value.
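
To make that rounding concrete, here is a minimal standalone C sketch of
the arithmetic described above, using the numbers from this thread; the
exact margin percentage is my own guess for illustration, not taken from
Squid's source:

    #include <stdio.h>

    int main(void) {
        int total_fd    = 16384;   /* squid's starting FD limit      */
        int reserved_fd = 15394;   /* reserved count after the error */
        int safe_fd     = total_fd - reserved_fd;

        printf("safe FD after adjustment: %d\n", safe_fd);    /* 990 */

        /* The reserve sits 2-3% inside the FD number that errored,  */
        /* so adding that margin back recovers the point where the   */
        /* errors actually started (~3.4% is illustrative only):     */
        printf("safe FD + ~3.4%% margin : %.0f\n", safe_fd * 1.034);
        return 0;
    }

Running it prints 990 and 1024, matching the values quoted above.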

I have managed to raise the per-process limit from 16K to 64K, and this is
reflected in the mgr:info statistics.  However, if I understand your logic
above, this is unlikely to be of benefit - I still have to find where
Ubuntu is setting a limit of 1024.  Am I correct?  Is this a socket limit,
rather than a generic file descriptor limit?
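
For what it's worth, 1024 is both the traditional default soft value of
RLIMIT_NOFILE on Linux and glibc's FD_SETSIZE for select(), so either
could be the source here. A small C check like the sketch below (my own
illustration, nothing squid-specific) prints what a process actually
inherits:

    #include <stdio.h>
    #include <sys/select.h>    /* FD_SETSIZE               */
    #include <sys/resource.h>  /* getrlimit, RLIMIT_NOFILE */

    int main(void) {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        /* The soft limit is what the process is actually held to;  */
        /* the hard limit is the ceiling it may raise itself to.    */
        printf("RLIMIT_NOFILE soft: %llu\n",
               (unsigned long long)rl.rlim_cur);
        printf("RLIMIT_NOFILE hard: %llu\n",
               (unsigned long long)rl.rlim_max);

        /* select() can only watch FDs below FD_SETSIZE (1024 on    */
        /* glibc), no matter how high RLIMIT_NOFILE is raised.      */
        printf("FD_SETSIZE        : %d\n", FD_SETSIZE);
        return 0;
    }

If that still reports a soft limit of 1024 when run the same way squid is
started, the 64K change is probably not reaching the daemon's environment;
/etc/security/limits.conf is the usual place Ubuntu sets per-user values.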
