Re: Could this be a potential problem? Squid stops working and requires restart to work

Amos Jeffries wrote:
On Mon, 07 Dec 2009 14:47:22 -0900, Chris Robertson <crobertson@xxxxxxx>
wrote:
Asim Ahmed @ Folio3 wrote:
Hi,

I found this in cache.log when i restarted squid after a halt!

CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
       total space in arena:    7452 KB
       Ordinary blocks:         7363 KB    285 blks
       Small blocks:               0 KB      1 blks
       Holding blocks:         14752 KB     94 blks
       Free Small blocks:          0 KB
       Free Ordinary blocks:      88 KB
       Total in use:           22115 KB 297%
       Total free:                88 KB 1%
This is not likely the source of your trouble...

http://www.squid-cache.org/mail-archive/squid-users/200904/0535.html

Chris

That would be right if they were negatives or enough to wrap 32-bit back
to positive.

Since it's only ~300%, I'm more inclined to think it's a weird issue with
the Squid memory cache objects.

The bug of this week seems to be a few people now seeing multiple-100%
memory usage in Squid on FreeBSD 7+ 64-bit systems. It comes down to Squid's
memory-cache objects being very slightly larger than the malloc page size,
causing two pages to be allocated per node instead of one, plus our use of
fork() allocating N times the virtual memory, which mallinfo() may report.
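As a rough sketch of that page rounding (the page and node sizes here are illustrative assumptions, not Squid's actual values):

```shell
# Illustrative only: an allocation a few bytes over one malloc page
# ends up costing two whole pages per node.
PAGE=4096            # assumed malloc page size
OBJ=4104             # assumed node size, slightly over one page
PAGES=$(( (OBJ + PAGE - 1) / PAGE ))
echo "$PAGES pages per node"    # prints "2 pages per node"
```

So even a few bytes of overhead per node can nearly double resident memory for the in-memory cache.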

Asim Ahmed: does that match your OS?


Amos

Asim Ahmed @ Folio3 wrote:
> I am using Red Hat Enterprise Linux Server release 5.3 (Tikanga) with
> shorewall 4.4.4-2 and Squid 3.0 STABLE20-1. My problem is kind of weird:
> Squid stops working after about a day and I need to restart it before
> users can browse the internet again. Any parameters to look for? Out of
> 2 GB RAM, only 200 MB is left free when I find Squid halted (before
> restarting it).

Look out for cache_mem; on top of that, roughly 10 MB of RAM per GB of cache_dir is gobbled by Squid for indexing the cache_dir contents, and then a pile of other RAM goes to in-flight transactions.
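For reference, the knobs involved look something like this in squid.conf (the values here are examples only, not recommendations):

```
# Illustrative squid.conf fragment -- values are examples, not advice.
cache_mem 256 MB                              # RAM for the in-memory object cache
cache_dir ufs /var/spool/squid 10240 16 256   # 10 GB disk cache
# Rule of thumb: the in-memory index for a cache_dir costs very roughly
# 10 MB of RAM per GB of disk cache, on top of cache_mem and the memory
# used by transactions in flight.
```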

If you can, please grab as much info as possible about the memory situation on the box when Squid hangs up like this. It might be helpful in tracking down whatever is happening.

An strace of the Squid child (worker) process, to find out exactly what Squid is doing at the time, can also be very useful.
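A minimal sketch of the kind of snapshot that helps (the log path is an assumption; fill in the worker PID yourself):

```shell
#!/bin/sh
# Sketch: snapshot the memory state on a Linux box when Squid hangs.
LOG="/tmp/squid-hang-$(date +%Y%m%d-%H%M%S).log"
{
  date
  free -m                                 # overall RAM and swap usage
  cat /proc/meminfo                       # kernel's detailed memory accounting
  ps -o pid,vsz,rss,etime,args -C squid   # per-process virtual/resident sizes
} > "$LOG" 2>&1
echo "wrote $LOG"
# To see what the worker is actually doing, attach strace to the child:
#   strace -f -tt -p <squid-child-pid> -o "$LOG.strace"
```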

>
> One more question i have is: In just two days my squid cache has grown
> to 500 MB. I've set cache_dir as 10GB ... I believe it will not take
> long before it will reach this limit! what happens then? does it start
> discarding old cache objects or what?
>

Yes, it starts discarding objects that are too old or have gone unused for long periods, making as much space as needed for new incoming ones. You will see the cache reach a peak and level off just under the 10 GB mark (leaving space in case you get a sudden burst of large cacheable objects).
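The point at which eviction kicks in is tunable via the swap watermarks; a sketch with the default values:

```
# Illustrative squid.conf fragment (these are the defaults).
# Squid starts evicting once the cache_dir passes the low watermark,
# and evicts more aggressively as it approaches the high one.
cache_swap_low 90                # percent of cache_dir size
cache_swap_high 95               # percent of cache_dir size
cache_replacement_policy lru     # which objects get discarded first
```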

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15
