Re: Squid and CPU 100%

A quick grep of access.log before the issue I reported shows 1350 lines within a single minute. I take that to mean there were 1350 requests during that minute, even though some of them were denied by squid.conf's policies. So the estimate should be less than 2 * 1350; I'd use that value anyway and add 30%, which gives 3510.
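For reference, something along these lines gives that per-minute count (a rough sketch, assuming the default native access.log format where the first field is a Unix timestamp in seconds; the epoch values and log path are placeholders, not the actual window):

    # count access.log entries whose timestamp falls within one given minute
    awk '$1 >= 1693820400 && $1 < 1693820460' /var/log/squid/access.log | wc -l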

I also grepped access.log for requests in other time ranges around the reported issue and came up with more or less the same values.

So, correct me if I'm wrong, but raising max_filedescriptors (currently at 32768) won't solve the root cause of the problem.

You mention that another, unknown process might suddenly consume almost all CPU cycles, drastically slowing Squid down and quickly exhausting its file descriptors.
That could be the case, since I'm using c-icap + clamav. The clamd process occasionally peaks at 100% CPU (for very short periods), but I would have to prove that this is actually the cause. Wouldn't I need to monitor CPU usage of all processes at all times?
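For example, a rough sampling loop like the one below could keep that record without watching the box by hand (the 10-second interval and log path are arbitrary examples):

    # log the top CPU consumers every 10 seconds, with a timestamp
    while true; do
        { date '+%F %T'; ps -eo pid,comm,%cpu --sort=-%cpu | head -6; } >> /var/log/cpu-samples.log
        sleep 10
    done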

Also, wouldn't it be useful to check "squidclient mgr:filedescriptors" every 10 minutes or so (see the sketch below)? I have the feeling the count is steadily growing over time, even when overall CPU usage is low. So the second, less likely theory may also be a candidate.
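A simple way to collect that could be something like this (just a sketch; the 10-minute interval and log file are examples, and squidclient may need host/port options depending on the setup):

    # append a timestamped snapshot of Squid's file descriptor report every 10 minutes
    while true; do
        { date '+%F %T'; squidclient mgr:filedescriptors; } >> /var/log/squid/fd-snapshots.log
        sleep 600
    done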

Under what circumstances would "squidclient mgr:filedescriptors" show ever-growing numbers even with very low CPU and network usage (Squid seems to be very responsive)?

Vieri
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



