Gavin McCullagh wrote:
Hi,
we're running a reasonably busy squid proxy system here which peaks at
about 130-150 requests per second.
The OS is Ubuntu Hardy and at the minute, I'm using the packaged 2.6.18
squid version. I'm considering a hand-compile of 2.7, though it's quite
nice to get security patches from the distro.
FYI: The latest Intrepid or Jaunty package should work just as well in
Hardy.
We have 2x SATA disks, a 150GB and a 1TB. The linux system is on software
RAID1 across the two disks. The main cache is 600GB in size on a single
non-RAID 970GB partition at the end of the 1TB disk. A smaller partition
is reserved on the other disk as a secondary cache, but that's not in use
yet and the squid logs are currently written there. The filesystems for
the caches are reiserfs v3 and the cache format is AUFS.
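For reference, that layout boils down to a single cache_dir line in
squid.conf. The path and the L1/L2 directory counts below are only
illustrative; the size is given in MB:

  # 600GB AUFS cache on the reiserfs partition (600GB ~= 614400 MB)
  # syntax: cache_dir aufs <path> <Mbytes> <L1> <L2>
  cache_dir aufs /cache1 614400 64 256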
We've been monitoring the hit rates, cpu usage, etc. using munin. We
average about 13% byte hit rate. Iowait is now a big issue -- perhaps not
surprisingly. I had 4GB RAM in the server and PAE turned on. I upped this
to 8GB with the idea of expanding squid's RAM cache. Of course, I forgot
that the squid process can't address anything like that much RAM on a
32-bit system. I think the limit is about 3GB, right?
For 32-bit I think it is, yes. You can rebuild squid as 64-bit or check
the distro for a 64-bit build.
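A quick sanity check on what you have now (the binary path may differ
on your install):

  file /usr/sbin/squid   # reports 'ELF 32-bit' or 'ELF 64-bit'
  uname -m               # i686 = 32-bit kernel, x86_64 = 64-bit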
However, keep this in mind: the rule of thumb is 10MB of index per GB of
cache. So your 600GB disk cache is likely to use ~6GB of RAM for the
index, plus whatever cache_mem you allocate for the RAM-cache, plus the
index for the RAM-cache, plus OS and application memory.
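Spelled out, with a 2GB cache_mem as an example:

  600GB x 10MB/GB  ~=  6GB  (disk-cache index)
  + cache_mem          2GB  (RAM-cache)
  + RAM-cache index, OS and application memory
  ---------------------------------------------
  well past the ~3GB a 32-bit process can address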
I have two questions. Whenever I up the cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit? Could squid not behave
a little more politely in this situation -- either by not attempting to
allocate the extra RAM, or by giving a warning or an error?
cache.log should contain a FATAL: message, and possibly a line or two
beforehand about why and where the crash occurred.
Please can you post that info here.
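For example (the log location is distro-dependent; Ubuntu's package
normally logs to /var/log/squid/cache.log):

  grep -B2 FATAL: /var/log/squid/cache.log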
My main question is: is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it, but with a 600GB cache, I doubt
that'll be much help.
RAM swapping (the OS paging squid's memory out to disk) is one major
performance killer. Squid needs direct access to all its memory for fast
index searches and in-transit processing.
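It is worth checking whether the box is actually swapping while squid
runs, e.g. with the standard tools:

  free -m      # look at the swap 'used' column
  vmstat 5     # non-zero si/so columns mean active swapping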
I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots. The thing is, I
don't know how squid addresses multiple caches. If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit data, or does it simply treat each cache as equal? Are
there docs on these sorts of issues?
No need; that is already built into Squid. cache_mem defines the amount
of RAM-cache Squid uses.
Squid allocates the disk space based on free space and attempts to
spread the load evenly over all dirs to minimize disk access/seek times.
cache_mem is used for the hottest objects to minimize delays even further.
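In squid.conf that tuning looks something like this (numbers are purely
illustrative, and the heap policies need a build with
--enable-removal-policies):

  cache_mem 2048 MB                      # hot-object RAM cache
  maximum_object_size_in_memory 64 KB    # keep only small objects hot
  memory_replacement_policy heap GDSF    # favours small, popular objects
  cache_replacement_policy heap LFUDA    # favours frequently-hit objects on disk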
Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6