On Sun, 3 Apr 2011 13:41:22 -0400, squid@xxxxxxxxxxxxxxxxxxxxxxx wrote:
Good day,
I have 4GB RAM installed in my squid server.
After increasing the RAM "maximum resident size" due to high page faults
and reconfiguring squid, a "WARNING: Very large
maximum_object_size_in_memory settings can have negative impact on
performance" message was displayed.
No, no. That setting does not affect the problem like that.
Resident size is not something Squid can easily affect directly.
Have a read through http://wiki.squid-cache.org/SquidFaq/SquidMemory to
learn how Squid uses memory and what things can be adjusted to affect
that.
You need to ensure that when squid is not running the operating system
says "free available memory" is a bigger number than the Squid maximum
resident size. And that when Squid is running the amount of virtual or
"swap" memory reported by the operating system is zero.
More on that below. But please read that wiki page before continuing,
the answers below will make a lot more sense when you know the
background ideas.
What is the implication of this warning, any danger?
The setting you changed is the limit on *individual* objects stored in
memory. The problems referred to are the swapping ones you are already
seeing before the change. The change may make them randomly even worse
than before.
See below for more information and excerpts from my squid.conf
Regards,
Yomi.
C:\squid\sbin>squid -n squid -k reconfigure
2011/04/03 17:46:15| WARNING: Very large maximum_object_size_in_memory
settings can have negative impact on performance
Status of squid Service:
Service Type: 0x10
Current State: 0x4
Controls Accepted: 0x5
Exit Code: 0
Service Specific Exit Code: 0
Check Point: 0
Wait Hint: 0
# MEMORY CACHE OPTIONS
# ---------------------------------------------------------------------------
#Default:
cache_mem 1024 MB
Hmm, 4GB of RAM on the system and you are dedicating 25% of it to a RAM
cache for Squid.
#Default:
maximum_object_size_in_memory 131072 KB
This can at most be set to the same as cache_mem. Though generally you
want many HTTP objects to fit in the RAM cache at once, so a much
smaller per-object limit is usual.
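For example, a much lower per-object limit would look like this (the
512 KB value is only an illustration, not a recommendation from this
thread; the right number depends on your traffic):

```
# Allow individual objects of up to 512 KB into the RAM cache,
# leaving room for many objects within the 1024 MB cache_mem.
maximum_object_size_in_memory 512 KB
```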
#Default:
# memory_replacement_policy lru
# DISK CACHE OPTIONS
# --------------------------------------------------------------------------
#Default:
# cache_replacement_policy lru
#Default:
cache_dir ufs c:/squid/var/cache 40960 128 512
cache_dir ufs d:/squid/var/cache 20480 128 512
cache_dir ufs e:/squid/var/cache 5120 128 512
cache_dir ufs f:/squid/var/cache 20480 128 512
Using the rule-of-thumb estimate of 10MB of index memory per GB of
cache, those dirs need roughly 880 MB of memory for their indexes. Plus
the cache_mem RAM cache. That gives up to 2GB of RAM consumed by Squid
before any clients start connecting. More is needed for traffic
handling.
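As a quick sanity check on that estimate (a sketch; 10MB per GB is the
common rule of thumb, and the exact multiplier varies with object size
and platform, which is why this lands a little under the ~880 MB figure):

```python
# Disk cache sizes taken from the four cache_dir lines above, in MB.
cache_dirs_mb = [40960, 20480, 5120, 20480]

total_mb = sum(cache_dirs_mb)   # 87040 MB of disk cache in total
total_gb = total_mb / 1024      # 85.0 GB
index_mb = total_gb * 10        # ~10 MB of index RAM per GB cached

print(total_gb, index_mb)       # 85.0 850.0
```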
Also, I believe the aufs scheme uses Windows disk I/O threading. You
could possibly avoid some of the disk speed problems by changing those
cache_dir types from ufs to aufs (just a Squid restart is needed to
switch).
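A sketch of what that change looks like, keeping the same paths and
sizes as in your config above and changing only the scheme name:

```
# aufs does disk I/O in threads, so slow disk operations
# do not block the main Squid process the way ufs does.
cache_dir aufs c:/squid/var/cache 40960 128 512
cache_dir aufs d:/squid/var/cache 20480 128 512
cache_dir aufs e:/squid/var/cache 5120 128 512
cache_dir aufs f:/squid/var/cache 20480 128 512
```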
Also check that those sizes leave at least 10% of the disk space free
on each disk for the cache swap.state journals. If a disk fills up,
Squid becomes unable to save swap.state, which is what your shutdown
is seeing.
Also check that the disks are not being put into any kind of standby or
hibernate mode underneath Squid. I suspect that would lead to the
"resource unavailable" messages your logs show.
#Default:
# cache_swap_low 90
# cache_swap_high 95
With caches >20GB I would change that low-threshold 90 to 94. That
minimizes the period during which the background garbage collection can
drain speed.
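In squid.conf that suggestion looks like this (the lines above are
commented-out defaults; uncomment them and raise the low threshold):

```
# Start background eviction at 94% full instead of 90%, narrowing
# the band in which garbage collection competes with client traffic.
cache_swap_low 94
cache_swap_high 95
```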
There is nothing there to indicate a box with 4GB RAM would swap badly.
So I conclude there must be other software hogging memory and reducing
the amount available to Squid. Removing that other software would be a
good thing for performance.
Amos