Re: high memory usage (squid 3.2.0)

On 11/04/2013 7:11 p.m., Marcello Romani wrote:
On 10/04/2013 17:22, Mr Dash Four wrote:


Marcello Romani wrote:
On 10/04/2013 13:59, Mr Dash Four wrote:


Marcello Romani wrote:
On 09/04/2013 19:33, Mr Dash Four wrote:
> [snip]
if the maximum_object_size_in_memory is reduced, then I suppose squid's
memory footprint will have to go down too, which makes the cache_mem
option a bit useless.

I think it will just store more objects in RAM.

I am sorry, but I don't understand that logic.

If I set cache_mem (which is supposed to be the limit of ram squid is
going to use for caching), then the maximum_object_size_in_memory should
be irrelevant. The *number* of objects to be placed in memory should
depend on cache_mem, not the other way around.

You're wrong.
Each object that squid puts into cache_mem can have a different size.
Thus the number of objects stored in cache_mem will vary over time
depending on the traffic and selection algorithms.
I don't see how I am wrong in what I've posted above.

You wrote:
"if the maximum_object_size_in_memory is reduced,
then I suppose squid's memory footprint will have to go down too,
which makes the cache_mem option a bit useless."

(Perhaps you should've written: which *would make* the cache_mem option a bit useless.)

I haven't made real-life measurements to test how maximum_object_size_in_memory affects squid's memory footprint, but my feeling is that lowering it would *not* decrease memory usage. I would instead expect an *increase* in total memory consumption, because more objects in cache_mem would mean more memory used for the indexes needed to manage them.

A smaller maximum_object_size_in_memory *raises* memory usage. cache_mem is the fixed baseline amount of memory used for the RAM cache, and each object uses an indexing memory structure as well. Limiting memory to storing smaller objects means more of them fit into the same fixed-size cache_mem, and thus require more index entries ... with more overheads.
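That index-overhead effect can be sketched numerically. The per-entry cost below is a made-up illustrative figure, NOT squid's actual in-memory index size:

```python
# Illustration only: INDEX_ENTRY_BYTES is a hypothetical per-object
# index cost, not squid's real StoreEntry size.
CACHE_MEM = 200 * 1024 * 1024      # cache_mem 200 MB, in bytes
INDEX_ENTRY_BYTES = 100            # assumed per-object index overhead

def index_overhead(avg_object_bytes):
    """Index memory needed to fill cache_mem with objects of this size."""
    n_objects = CACHE_MEM // avg_object_bytes
    return n_objects * INDEX_ENTRY_BYTES

# Shrinking the typical in-memory object from 64 KB to 8 KB fits 8x as
# many objects into the same cache_mem, and costs 8x the index memory.
print(index_overhead(64 * 1024))   # 320000 bytes
print(index_overhead(8 * 1024))    # 2560000 bytes
```

Whatever the real per-entry cost is, the overhead scales with the object *count*, and the count goes up as the per-object size limit goes down.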



I am not saying that the number of objects placed in RAM will be
constant; all I am saying is that the total memory used by all objects
placed in RAM should not be 6 times the cache_mem value I've specified
in my configuration file - that is simply wrong, no matter how you twist it.

What currently seems to happen is that cache_mem is completely ignored
and squid is trying to shove as many objects into my RAM as possible,
to the point where nothing else on that machine is able to function
normally. This is like putting the cart before the horse - ridiculous!

As stated elsewhere, previous versions of squid had memory leaks. That
doesn't mean squid is _designed_ to put as many objects in ram as
possible.
Well, as I indicated previously, my cache_mem is 200MB. The current
memory usage of squid was 1.3GB - more than 6 times what I specified. That
is not a simple memory "leak" - that is one hell of a raging torrent if
you ask me!

I agree. Unfortunately there were a few of those introduced in the 3.2 development code. They have since been polished out by the QA process.

Mr Dash Four, you said version 3.2.0? We never released a three-numeric version ending in '0', because the '0' set are the development beta releases; there is always a fourth numeral indicating which beta release it is. The QA process is only completed, and things like memory leaks expected to be removed, when we reached the 3.2.1 release.


Also, the cache_mem value must not be confused with a hard limit on
total squid memory usage (which AFAIK cannot be set). For example
there's also the memory used to manage the on-disk cache (10MB per GB
IIRC - google it for a reliable answer).
Even if we account for that, I don't see why squid should be occupying 6
times more memory than what I restricted it to use.

This is what the official squid wiki has to say about this ratio:

"rule of thumb: cache_mem is usually one third of the total memory consumption."

But you see... it's just a "rule of thumb". Squid uses additional memory to manage on-disk cache. Again, from the squid memory page:

"10 MB of memory per 1 GB on disk for 32-bit Squid
14 MB of memory per 1 GB on disk for 64-bit Squid"

So if you have a very large on-disk cache but specify a low cache_mem parameter, the 6:1 ratio can be easily exceeded.

Also, Squid requires up to 256 KB per client transaction (averaging around 16KB per FD). So with cache_mem at 200MB and 4000 concurrent clients, 1.3 GB can also easily be reached despite everything else memory-related being disabled.
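As a rough worst-case check, the figures above (256 KB per concurrent client transaction, on top of a 200 MB cache_mem) already land in the same ballpark as the reported 1.3 GB:

```python
# Worst-case per-transaction memory, using the figures quoted above.
cache_mem_mb = 200
clients = 4000            # concurrent client transactions
per_txn_kb = 256          # up to 256 KB each

total_mb = cache_mem_mb + clients * per_txn_kb / 1024
print(total_mb)           # 1200.0 MB
```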

I think this is what Alex was referring to earlier when he mentioned the problem may still exist (and not be a leak) even if you eliminated caching as a source of the problem. No need to be a developer to test this: just add "cache deny all" and "cache_mem 0" to the config file and see if the memory usage remains.
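The test amounts to two squid.conf lines, exactly as quoted above (this disables caching entirely, so remember to revert it afterwards):

```
# squid.conf: rule caching out as the source of memory growth
cache deny all
cache_mem 0
```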



Suppose you specify cache_mem 32MB and have a 40GB cache_dir.
That would give (at least) 32MB + 40GB / 1GB * 10MB = 432MB.
432 / 32 = 13.5
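The same arithmetic can be checked directly, using the 32-bit figure of 10 MB of index memory per GB of disk cache quoted above:

```python
# Reproducing the worked example: cache_mem 32MB with a 40GB cache_dir.
cache_mem_mb = 32
cache_dir_gb = 40
index_mb_per_gb = 10   # 32-bit squid; the wiki quotes 14 for 64-bit

total_mb = cache_mem_mb + cache_dir_gb * index_mb_per_gb
print(total_mb)                  # 432
print(total_mb / cache_mem_mb)   # 13.5
```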

I'm not saying this would be a sensible configuration, nor denying there's an actual problem in your case. Plus, I'm not claiming I would be able to predict a squid instance's memory usage (I prefer to graph that over time with munin). It's just that, IMVHO, you're barking up the wrong tree.

:-)


NP: and no shame in that, most people here do it. Myself included.


Cheers
Amos
(melding back into the woodwork for a few days.)



