Re: High memory usage under load with caching disabled, memory is not being freed even with no load

Hi Amos,

> > From what I remember there is a calculation for how much memory per
> > connection Squid should use.
>
> Aye;
>  256KB * number of currently open FD
>  + read_ahead_gap
>  + received size of current in-transit response (if cacheable MISS)

I tried to reduce the number of in-memory objects using the following (I
read somewhere that 4 KB is the minimum block of memory that Squid can
allocate):
  cache_mem 8 MB
  maximum_object_size 4 KB
  maximum_object_size_in_memory 4 KB
  read_ahead_gap 4 KB

But the above settings did not help much.
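
For what it is worth, this is how I double-checked which values Squid
actually applied (the host/port are just where my local cache manager
listens, and the config report may need a cachemgr_passwd entry on some
setups):

  squidclient -h 127.0.0.1 -p 3128 mgr:config | grep -E 'cache_mem|maximum_object_size|read_ahead_gap'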

At the moment I am running a much lighter load on the squid VM to see
how it behaves.

So, right now, the machine has about 110,000 open TCP connections (I
guess half are from clients and the other half goes to the Internet,
which my firewall also confirms). It has been running like this for
the last 4 hours or so.
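
If I plug that number into the 256 KB-per-FD formula you quoted above,
as a very rough worst-case ceiling (and assuming the read_ahead_gap
part applies per connection), I get:

  110,000 FD * 256 KB           ~= 28,160,000 KB  ~= 27 GB
  110,000 * read_ahead_gap 4 KB ~=    440,000 KB  ~= 0.4 GB

So the ~9 GB the machine is using now is well below that ceiling, if I
am reading the formula correctly.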

Here is the situation (in the attachment you will find the full
command print-outs and the config file):

- Running Squid 4.12 from the Diladele repository on Ubuntu 18.04 LTS

- RAM used: around 9 GB out of 16 GB (no swap is used)

- I am running 2 Squid workers at the moment (see the attached squid.conf)

- top reports this (I removed the other processes; they have
practically no impact on the memory listing):

top - 10:22:03 up 18:00,  1 user,  load average: 1.88, 1.58, 1.43
Tasks: 169 total,   1 running,  94 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.6 us,  9.6 sy,  0.0 ni, 72.7 id,  0.1 wa,  0.0 hi,  3.9 si,  0.0 st
KiB Mem : 16380456 total,  5011560 free,  9205660 used,  2163236 buff/cache
KiB Swap: 12582904 total, 12582904 free,        0 used.  7121124 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  1515 proxy     20   0 5713808 4.396g  14188 S  36.5 28.1 130:03.34 squid
  1514 proxy     20   0 4360348 3.329g  14380 S  28.9 21.3 104:15.21 squid

 - mgr:info shows some weird stats:
    Number of clients accessing cache:      156   (which is exactly
twice the number of actual clients, but this is probably due to the
number of workers)
    Maximum Resident Size: 32467680 KB   (which is about 32 GB. At no
time during these 4 hours has RAM consumption ever come close to that
value. The memory is steadily but slowly increasing to where it is now,
at 9 GB. I have no idea what this value represents.)

I have no idea whether the rest of the stats from mgr:info are OK or
not; I really have no way of checking that.
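
I can also pull the full cache manager reports from the proxy itself if
that would help; this is what I would run (the host/port are just where
my local instance listens):

  squidclient -h 127.0.0.1 -p 3128 mgr:info > info.txt
  squidclient -h 127.0.0.1 -p 3128 mgr:mem > mem.txt

Just tell me which reports you would like to see.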

I added the memory pools option to the configuration; we will see if it
helps (I think I already tried this, but I cannot be sure, as I ran a
lot of tests trying to fix this myself before I reached out to you):
memory_pools off
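
If turning the pools off does not change anything, my next test
(assuming I understand the memory_pools_limit directive correctly)
would be to keep the pools but cap the idle memory they retain:

  memory_pools on
  memory_pools_limit 32 MB

but I will wait for your advice before changing anything else.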

If there is anything else I can do to help with debugging this, please
let me know.

Thank you for your time and help,
Ivan


On Fri, Aug 7, 2020 at 2:23 AM Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
>
> On 6/08/20 11:06 am, NgTech LTD wrote:
> > Hey Ivan,
> >
> > From what I remember there is a calculation for how much memory per
> > connection Squid should use.
>
> Aye;
>  256KB * number of currently open FD
>  + read_ahead_gap
>  + received size of current in-transit response (if cacheable MISS)
>
>
> > Another thing is that Squid is not returning memory once it took it.
>
> The rule here is that a _minimum_ of 5MB per type of memory-allocating
> object is retained by Squid for quick re-use. The mgr:mem report lists
> details of those allocations.
>
>
>
> Alex didn't mention this earlier but what I am seeing in your "top" tool
> output is that there are 5x 'squid' processes running. It looks like 4
> of them are SMP worker or disker processes each using 2.5GB of RAM.
>
> The "free" tool is confirming this with its report of "used: 10G" (4x
> 2.5GB) of memory actually being used on the machine.
>
> Most kernels' fork() implementation is terrible with virtual memory
> calculations. Most of that number will never actually be used, so it
> can be ignored so long as the per-process number does not exceed the
> actual physical RAM installed (beyond that the kernel refuses to spawn
> with fork()).
>  The numbers your tools are reporting are kind of reasonable - maximum
> about 7GB *per process* allocated.
>
>
> The 41GB "resident size" is from old memory allocation APIs in the
> kernel which suffer from 32-bit issues. When this value has odd numbers
> and/or conflicts with the system tools - believe the tools instead.
>
>
>
> So to summarize; what I am seeing there is that during *Peak* load times
> your proxy workers (combined) are *maybe* using up to 41GB of memory. At
> the off-peak time you are doing your analysis reports they have dropped
> down to 10GB.
>  With one data point there is no sign of a memory leak happening. Just a
> normal machine handling far more peak traffic than its available amount
> of memory can cope with.
>
>  That is not to rule out a leak entirely. More measurements over time
> may show a pattern of increasing off-peak memory allocated. But just one
> comparison of peak vs off-peak is not going to reveal that type of pattern.
>
>
> Amos

<<attachment: Proxy01_test_2020-08-07.zip>>

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
