
Re: Squid memory usage


 



Fantastic explanation! Thanks heaps Amos. Would it make sense for this
to go onto wiki.squid-cache.org somewhere?
--
Nathan Hoad
Software Developer
www.getoffmalawn.com


On Wed, May 29, 2013 at 7:00 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
> On 29/05/2013 2:19 p.m., Nathan Hoad wrote:
>>
>> On Tue, May 28, 2013 at 3:23 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx>
>> wrote:
>>>
>>> On 28/05/2013 3:59 p.m., Nathan Hoad wrote:
>>>
>>> I take it you are referring to the 2.0g resident size?
>>
>> That is what I'm referring to, yes - the resident size has increased
>> to 2.5g since my previous mail, virtual to 2.6g.
>>
>>> 1GB is within the reasonable use limit for a fully loaded Squid under
>>> peak
>>> traffic. The resident size reported is usually the biggest "ever" size of
>>> memory usage by the process.
>>>
>>> FWIW: The memory report shows about 324MB being tracked by Squid as
>>> currently in use for other things than cache_mem with 550 active clients
>>> doing 117 transactions at present. The client transaction related pools
>>> show
>>> that the current values are 1/3 of peak traffic, so 3x 360MB ==> ~1GB
>>> under
>>> peak traffic appears entirely possible for your Squid.
>>
>> Out of interest, how did you come to the 324MB? I'd be interested in
>> learning how to read the output a bit better :)
>
>
> Okay. (For anyone reading: the report is in TSV format - open it in Libre
> Calc or Excel as tab-separated columns.)
>
> The final row of the big table is the Totals of all rows above it. I took
> the 1360314 from the Allocated section's "(KB)" column [~1360 MB] and
> subtracted 1 GB / 1024 MB. That is the current total memory usage Squid is
> aware of, either in active use or awaiting re-allocation, minus what you
> said cache_mem was configured to.
>
> NP: Before subtracting I did a quick check of the mem_node pool (i.e.
> cache_mem memory 'pages'): there is ~1025 MB. That is enough for the full
> 1024 MB cache_mem plus a few extra MB for items in transit right now that
> are not cacheable - these use mem_nodes as well.
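>
> As a minimal sketch of that arithmetic (illustrative only: the variable
> names and the strict 1024 KB/MB rounding are mine, so it lands near rather
> than exactly on the ~324 MB figure above):
>
>   # Figures copied from the Totals row of the memory pools report.
>   total_allocated_kb = 1360314     # Allocated "(KB)" column, Totals row
>   cache_mem_kb = 1024 * 1024       # configured cache_mem: 1 GB
>
>   # Memory Squid tracks for everything other than cache_mem.
>   other_mb = (total_allocated_kb - cache_mem_kb) / 1024.0
>   print("non-cache_mem usage: ~%d MB" % other_mb)   # roughly 300 MB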
>
>
> Using a bit of developer inside knowledge, I identify in the Allocated
> section's "(#)" column the main state objects, which are allocated one for
> each client connection or request:
> * "cbdata ConnStateData" shows 2749 - at 1 per currently open client TCP
> connection.
> * "cbdata ClientHttpRequest " shows 550 - at 1 each per client HTTP request.
>   ++ sorry I got that wrong earlier myself.
> * "cbdata ClientReplyContext" also shows 550  - at 1 each per currently
> underway client HTTP responses.
>
> These objects also give me the details to estimate current versus peak
> traffic memory requirements. For example:
>  cbdata ClientReplyContext shows 550 currently allocated, using 35269 KB,
> with a highest-ever allocation of 100933 KB - roughly 3x the current memory
> usage.
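>
> In the same spirit, a tiny sketch of that current-versus-peak scaling (the
> numbers are copied from the ClientReplyContext row above; nothing here is
> Squid code, just arithmetic):
>
>   # ClientReplyContext pool: current vs highest-ever allocation.
>   current_kb = 35269    # Allocated "(KB)" now, for 550 responses
>   high_kb    = 100933   # "high (KB)", highest ever allocated
>
>   peak_ratio = high_kb / float(current_kb)   # ~2.9x current traffic
>   print("peak was about %.1fx current load" % peak_ratio)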
>
>
> The difference between the Allocated "(KB)" column and the "high (KB)"
> column shows how much is allocated now versus the highest ever allocated. A
> leak usually shows up as those two columns holding nearly identical values,
> although it is possible that only 1 in 10 objects leaks, or something
> similarly odd, which can hide it.
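>
> A rough sketch of that check as a helper (the 0.95 threshold and the
> function name are arbitrary choices for illustration, not anything Squid
> provides):
>
>   def looks_leaky(current_kb, high_kb, threshold=0.95):
>       """Flag a pool whose current allocation sits at or near its
>       historical high - the pattern a leak usually produces."""
>       if high_kb == 0:
>           return False
>       return current_kb >= threshold * high_kb
>
>   # e.g. looks_leaky(35269, 100933) -> False for ClientReplyContext above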
>
> I may have to change my reading a bit. Thinking about that column's meaning
> now, I see the Total value for the allocations "high (KB)" column only shows
> 1462831 KB. It's not very accurate, but it does show that even if all
> objects were reaching their maximum at the same time it would still be
> ~500 MB short of the 2 GB the system reports. Another thing that usually
> adds fuzz to these numbers is the spawning of helpers: the fork() used for
> that gives the child process a whole duplicate of the parent process's
> current memory space, effectively doubling the OS-reported memory values
> compared with reality.
>
>
> Like Alex said, leaks show up as an ever-increasing value in these numbers
> somewhere. Regular snapshots of that report and the system values taken
> across a week or two should be able to show if there is anything constantly
> growing at a regular rate.
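>
> Something like the following (assuming squidclient can reach the cache
> manager; the filename pattern is only an example) run from cron would
> collect those snapshots for comparison:
>
>   import subprocess, time
>
>   # Fetch the memory pools report and save it with a timestamp so
>   # week-to-week growth can be compared later.
>   report = subprocess.check_output(["squidclient", "mgr:mem"])
>   fname = time.strftime("mempools-%Y%m%d-%H%M.tsv")
>   with open(fname, "wb") as f:
>       f.write(report)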
>
> Amos



