
Re: Squid Memory and Page Faults


 



Thanks Amos, Eliezer and Markus for your replies!

@Eliezer: The server has 2 x 2.7 GHz CPUs, each with 12 cores. Squid is version 3.3.7, compiled from source, and I'm running only one Squid worker.

@Markus: What is the maximum process size that the TLB can address? Is it tunable? My operating system is CentOS 6.4 with kernel 2.6.34.

@Amos: Please see my comments below:


----- Original Message -----
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxx
Cc: 
Sent: Wednesday, July 24, 2013 9:58 PM
Subject: Re:  Squid Memory and Page Faults

On 25/07/2013 1:05 a.m., Golden Shadow wrote:
>> Hi there!
>>
>> My squid is installed on a server with 192 GB of RAM. I have the following directives in squid.conf:
>>
>> cache_mem 143360 MB
>> maximum_object_size_in_memory 300 KB
>> memory_replacement_policy heap GDSF
>>
>> memory_pools on
>> memory_pools_limit 1024 MB
>>
>> ipcache_size 2048
>> ipcache_low 90
>> ipcache_high 95
>>
>> fqdncache_size 2048
>>
>>
>>
>> top reports that my squid process size is 20 GB, which is far less than my RAM size,
>> but nevertheless I still find some page faults (about 70 page faults over 2 hours).
>> I'm wondering how those page faults could be occurring while the squid process size is far
>> less than my RAM size. How can I eliminate those time-consuming page faults?

>Two things here.
>
>Why is the process size only 20GB? You have a 143GB memory cache as part
>of that RAM consumption by Squid. Perhaps your traffic's real caching
>requirement is far smaller than you are allowing storage for.

Well, according to the cache manager, my squid RSS reached 173 GB at some point!
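
(To double-check that figure from the OS side, I've been comparing it with the kernel's own view, something like this, assuming a Linux /proc filesystem, where <pid> is the Squid process ID:

  grep VmRSS /proc/<pid>/status

just to see whether the two numbers roughly agree.)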

>What exactly is the page faulting coming from though ...  Squid or the OS?

How can I tell? The cache manager itself reports these page faults, so I guess they are coming from Squid.
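
(I can at least watch the kernel's per-process counters with something like the following, assuming procps ps on Linux, with <pid> again being the Squid process ID:

  ps -o pid,min_flt,maj_flt -p <pid>

If maj_flt is the counter that keeps growing, I suppose these are real disk-backed faults and not just Squid's own accounting.)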

>If it is Squid, why would the OS have swapped that piece of memory out
>to VM in the first place? Perhaps something else needs a chunk of
>memory larger than Squid leaves available?


I'm puzzled as well! I don't have any other process on that server that would take more than a few MBs!
Do you think that if I turn swap off, I could eliminate these page faults? After all, I have
enough physical memory to run the server without swap.
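
For example, I was considering one of these (just sketches on my part, assuming a standard CentOS userland):

  swapoff -a                  # disable all swap devices immediately
  sysctl -w vm.swappiness=10  # or just make the kernel less eager to swap

Would either of those be a sane approach here?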

>> My second question, am I using correct values for the memory-related directives mentioned above? 
>> If no, I would really appreciate if you could suggest the correct values.

>Any values you want are "correct", so long as they fit within the
>machine's limits and do not lead to the system swapping.


I see. What would you recommend for memory_pools_limit on a server with 192 GB of RAM?
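
For instance, I'm at 1024 MB now and was wondering whether simply raising it, e.g. (a value I picked myself, not anything from the docs):

  memory_pools_limit 2048 MB

would even be meaningful at this scale.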

>> My last question is about read_ahead_gap, whose default value is only 16 KB.
>> Would increasing this value to let's say 32 KB or 64 KB increase the performance
>> since I have high RAM on the server?

>Perhaps. That is a buffer size more related to your network speed. Each
>concurrent connection consumes up to that much RAM for buffers. If you
>have clients that can drain 32KB or 64KB fast enough not to cause waves
>or bursts in traffic, it can be worthwhile raising it a bit. If you have
>slow clients the reverse can be true.

>Amos
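
Thanks, that makes sense. If I do experiment with it, I take it the change itself is just the one directive (64 KB being my own trial value, to be tuned against our client speeds):

  read_ahead_gap 64 KB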



Best regards,
Firas





