
Re: Squid and CPU 100%

On 11/07/2017 04:42 AM, Vieri wrote:
> So I'm worried that 32768 may not be enough.
> Is this weird, or should I really increase this value?

Think about the underlying physics of what you are observing. It may
help reduce guessing and guide you towards a solution:

You can estimate the reasonable number of file descriptors from the
expected peak request rate and the mean response time: multiplying the
two gives the number of concurrent transactions (Little's law). Add,
say, 30% to account for long-lived persistent connections. Remember
that Squid uses one descriptor for the from-client connection and one
for the to-server connection, so double the transaction count. If that
estimate is way below 32K, then the current limit itself is not the
problem. Otherwise, it probably is (and then you probably need more
Squid workers, not more descriptors per worker).

* It is possible, perhaps even likely, that some unknown problem
suddenly consumes almost all CPU cycles, drastically slowing Squid down
and quickly consuming all file descriptors (because accepted connections
are not served fast enough).

* It is also possible, albeit less likely, that some unknown problem
slows Squid down over time and slowly leads to excessive file descriptor
use and even 100% CPU usage.

To distinguish the two cases, consider studying transaction response
times and the total number of connections logged every minute of every
hour. You should collect such stats for many other reasons anyway!
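In case it helps, here is a minimal per-minute aggregation sketch in
Python. It assumes Squid's native access.log format (field 1 = unix
timestamp, field 2 = response time in milliseconds); adjust the field
indices if you use a custom logformat:

import sys
from collections import defaultdict

counts = defaultdict(int)
total_ms = defaultdict(float)

for line in sys.stdin:
    fields = line.split()
    if len(fields) < 2:
        continue
    minute = int(float(fields[0])) // 60 * 60  # truncate to the minute
    counts[minute] += 1
    total_ms[minute] += float(fields[1])       # elapsed time, in ms

for minute in sorted(counts):
    n = counts[minute]
    print(f"{minute}  conns/min={n}  mean_ms={total_ms[minute] / n:.0f}")

Run it as, e.g., python3 perminute.py < /var/log/squid/access.log. A
sudden jump in mean_ms and connection counts would point at the first
case above; a gradual drift upward over hours would point at the second.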

Alex.
P.S. I trust you have already checked all system logs for more clues and
found nothing of interest there.



