
Re: Slow memory leak

On 9/26/2012 11:28 PM, tcr@xxxxxxxxxxxx wrote:
Regarding the config directives: I have tried changing their values in the past, but the defaults are quite conservative, so they shouldn't be responsible for this much memory usage, unless a default is being ignored and treated as unbounded instead.

I would in fact love to set up a lot of in-mem caching, but I can't because this leak is killing me.

My hunch is that I've got some combination of config params which creates a very slow leak, but it's not well-known because it only shows up on very heavy-utilization servers. My 50-100Mbps throughput comes in the form of normal web traffic, i.e. lots and lots of page hits as opposed to a few large file downloads, so the requests per second are quite high. I just took a quick sample off one server and it was doing around 200 requests per second.

Further, I expect the leak is some little bit of memory that's getting allocated in the same piece of code over and over. With gigabytes' worth of leaked memory sitting in my processes, it should be easy to find once I get the proper debugging environment in place.

I am ready to help with debugging this leak. It would be great to get it patched.
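
For example, one option for that environment (just a rough sketch on my side, assuming a build with debug symbols and the usual CentOS paths) would be to run Squid in the foreground under valgrind and compare that with the cache manager's memory pool report:

  # run Squid in the foreground under valgrind; very slow, debugging only
  valgrind --leak-check=full --show-reachable=yes \
      /usr/sbin/squid -N -d1 2> /tmp/squid-valgrind.log

  # and/or watch the memory pool report grow over time
  squidclient mgr:mem

(If I remember right, Squid also has a --with-valgrind-debug ./configure option that makes the valgrind reports cleaner.)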

Thanks
-Ty
I do remember that Squid 3.1 had a memory leak, so if I'm not wrong this leak is present in 3.1.10.

This package is more or less the standard one for CentOS.
I have an RPM package of 3.2.1 for Fedora 16-17, but I'm not sure how well it would work on CentOS 6.3.

If you are up for compiling from source, it would be very simple to put together a list of the compilation options you do and don't need for this server.
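
As a very rough starting point (the option list here is only an illustration, not a recommendation for your exact setup):

  ./configure --prefix=/usr --sysconfdir=/etc/squid \
      --with-large-files \
      --with-filedescriptors=65536 \
      --enable-storeio=aufs,ufs \
      --enable-removal-policies=lru,heap
  make && make install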

200 requests per second is not that much, I must say.

It's not nothing, but I have seen a couple of servers work under a much higher load (more than 10k users per server) with Squid 3.2.0.8 and 3.2.0.16 and never saw this kind of leakage on them.

If you have about 200 RPS, I would say the first tweak is to allow about 32000 FDs, if not more. The default CentOS FD limits are lower than what this kind of server load needs.
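
Roughly something like this (the file names are the usual ones on a stock CentOS RPM install, so adjust to your setup):

  # per-process limit for the squid user, e.g. in /etc/security/limits.conf:
  squid  soft  nofile  32768
  squid  hard  nofile  32768

  # the CentOS init script usually also reads this from /etc/sysconfig/squid:
  SQUID_MAXFD=32768

  # and in squid.conf (as far as I remember it cannot go above the
  # compile-time maximum the binary was built with):
  max_filedescriptors 32768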

Look at the "Number of file desc currently in use:" line just to get a sense of the server load in terms of FDs.
You can also get extra data on it from mgr:filedescriptors.

There you can see whether there are idle clients, among other data.
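
For example:

  # squidclient talks to localhost:3128 by default; use -h/-p if yours differs
  squidclient mgr:info | grep -i 'file desc'
  squidclient mgr:filedescriptors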

It could also be something with access.log, which you can disable for the moment to see whether it is contributing to the load.
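
For example, in squid.conf:

  # temporarily disable access logging, then reload with: squid -k reconfigure
  access_log none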

The Squid RPM you are using wasn't built with the large-files option, which should be there for this kind of load.

Have you looked at cache.log?
And maybe at the size of access.log?

How many times a week are you rotating the server's logs?
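
If not that often, a typical setup is something like this (the binary path is the usual one for the CentOS RPM, adjust as needed):

  # squid.conf: how many old generations of each log to keep
  logfile_rotate 10

  # cron, e.g. once a day at midnight:
  0 0 * * * /usr/sbin/squid -k rotate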

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer <at> ngtech.co.il

