On 25/07/2013 1:05 a.m., Golden Shadow wrote:
Hi there!
My squid is installed on a server with 192 GB of RAM. I have the following directives in squid.conf:
cache_mem 143360 MB
maximum_object_size_in_memory 300 KB
memory_replacement_policy heap GDSF
memory_pools on
memory_pools_limit 1024 MB
ipcache_size 2048
ipcache_low 90
ipcache_high 95
fqdncache_size 2048
top reports that my squid process size is 20GB, which is far less than my RAM size, but nevertheless I still see some page faults (about 70 over 2 hours). I'm wondering how those page faults can be occurring while the squid process size is far less than my RAM size. How can I eliminate those time-consuming page faults?
Two things here.
Why is the process size only 20GB? You have a 143GB memory cache as part
of that RAM consumption by Squid. Perhaps your traffic's real caching
requirement is far smaller than the storage you are allowing for it.
What exactly is the page faulting coming from though ... Squid or the OS?
If it is Squid, why would the OS have swapped that piece of memory out
to VM in the first place? Perhaps something else needs a chunk of
memory larger than Squid leaves available?
My second question: am I using correct values for the memory-related directives mentioned above? If not, I would really appreciate it if you could suggest the correct values.
Any values you want are "correct", so long as they fit within the
machine's limits and do not lead to the system swapping.
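As a rough illustration only (the numbers below are assumptions, not a
recommendation for your traffic), a layout on a 192 GB box that leaves
headroom for Squid's index overhead, the OS and other processes might
look like:

# keep the memory cache well below physical RAM so that index
# overhead and other processes never push the system into swap
cache_mem 131072 MB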
My last question is about read_ahead_gap, whose default value is only 16 KB. Would increasing this value to, let's say, 32 KB or 64 KB increase performance, since I have plenty of RAM on the server?
Perhaps. That is a buffer size more related to your network speed. Each
concurrent connection consumes up to that much RAM for buffers. If you
have clients that can drain 32KB or 64KB fast enough not to cause waves
or bursts in traffic, it can be worthwhile raising it a bit. If you have
slow clients the reverse can be true.
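If you do experiment with it, it is a single directive change; 64 KB
here is just an example value to trial, not a recommendation:

# each concurrent connection may buffer up to this much server data,
# so total RAM use scales with the number of simultaneous connections
read_ahead_gap 64 KB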
Amos