Re: Squid CPU 100% infinite loop

Have you looked at garbage collection as a possible source of the problem?
If you really have a 300 GB cache, that might take a long time to process during GC.

You might want to post your GC settings to see if anyone has a suggestion or can eliminate GC as the source of your problem.
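For reference, the squid.conf directives that control on-disk cache cleanup look roughly like this (the values here are illustrative only; what matters is what you actually have set):

   # Illustrative squid.conf excerpt -- post your real values
   cache_dir aufs /var/cache/squid 300000 16 256
   cache_swap_low  90     # start evicting objects when the cache is 90% full
   cache_swap_high 95     # evict more aggressively above 95% full
   cache_replacement_policy heap LRU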

The fact that everything works before, during, and after the episode, except that the CPU is too busy to talk to you, does make one suspect that Squid considers it a normal occurrence and not worth logging or noting for humans to read.


Ron

On 29/05/2013 1:48 PM, Mike Mitchell wrote:
I've hit something similar.  I have four identically configured systems, each with a 16K Squid FD limit, 24 GB RAM, and a 300 GB cache directory.  I've seen the same failure randomly on all four systems.  During the day the squid process handles > 100 requests/second, with peak FD usage around 8K FDs.  In the evenings the load drops to about 20 requests/second, with FD usage around 1K FDs.  CPU usage hovers below 10% during this time.
Randomly one of the four systems will start increasing its CPU usage.  It takes about 4 hours to go from less than 10% to 100%.  During the four hours the FD usage stays at 1K and the request rate stays right around 20 requests/second.  Once the CPU reaches 100% the squid service stops responding.  About 20 minutes later it starts responding again with CPU levels back down below 10%.  There is nothing in the cache log to indicate a problem.  The squid process did not core dump, nor did the parent restart a child.

I have not seen the problem during the day, only after the load drops.  The hangs do not coincide with the scheduled log rotations.  The one last night recovered a half-hour before the log rotated at 2:00 AM.

Every one of my hangs has been preceded by a rise in CPU usage, and Squid recovers on its own without logging anything.

I have a script that does
   GET cache_object://localhost/info
   GET cache_object://localhost/counters
every five minutes and puts the interesting (to me) bits into RRD files.
Obviously the script fails during the 20 minutes the squid process is non-responsive.
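For anyone curious, the polling loop is roughly the following (a sketch rather than the actual script, using squidclient instead of the GET lines above; the port, the mgr:info field names, and the RRD layout are assumptions that may need adjusting):

   #!/bin/sh
   # Sketch of the poller; assumes Squid listens on localhost:3128 and that
   # squid.rrd was created beforehand with two data sources (fd and cpu).
   INFO=$(squidclient -h localhost -p 3128 mgr:info)

   # Pull out the fields of interest; the match strings may differ by version.
   FD=$(printf '%s\n' "$INFO" | awk '/file desc currently in use/ {print $NF}')
   CPU=$(printf '%s\n' "$INFO" | awk '/CPU Usage:/ {print $NF}' | tr -d '%')

   # Record both samples at the current time; cron runs this every five minutes.
   rrdtool update squid.rrd "N:${FD}:${CPU}"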

________________________________________
From: Stuart Henderson [stu@xxxxxxxxxxxxxxx]
Sent: Tuesday, May 28, 2013 12:01 PM
To: squid-users@xxxxxxxxxxxxxxx
Subject: Re: Squid CPU 100% infinite loop

On 2013-05-17, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>
>> I have found the problem. In fact, the problem mentioned in my last
>> mail was right: the Squid FD limit was reached, but Squid does not
>> report an FD-limit problem every time the freeze appears, which made
>> debugging very difficult.
> Squid should warn when it runs out of FDs. If it does not, it is a
> bug. If you can reproduce this, please open a bug report in bugzilla
> and post relevant logs there.
>
> FWIW, I cannot confirm or deny whether reaching the FD limit causes what
> you call an infinite loop -- there was not enough information in your
> emails to do that. However, if reaching the FD limit causes high CPU
> usage, it is a [minor] bug.
I've just hit this one; ktrace shows that it's in a tight loop doing
sched_yield(). I'll try to reproduce it on a non-production system and open
a ticket if I get more details.
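For reference, capturing that trace on OpenBSD looks roughly like this (12345 is a placeholder for the PID of the spinning squid process):

   # Attach to the running process; the trace goes to ./ktrace.out by default
   ktrace -p 12345
   # ...wait while the CPU is pegged, then stop tracing
   ktrace -C
   # Dump the trace; a tight loop shows up as back-to-back sched_yield() calls
   kdump | grep sched_yield | head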







--
Ron Wheeler
President
Artifact Software Inc
email: rwheeler@xxxxxxxxxxxxxxxxxxxxx
skype: ronaldmwheeler
phone: 866-970-2435, ext 102




