Re: squid on 32-bit system with PAE and 8GB RAM

Gavin McCullagh wrote:
Hi,

we're running a reasonably busy squid proxy system here which peaks at
about 130-150 requests per second.
The OS is Ubuntu Hardy and, at the minute, I'm using the packaged squid
2.6.18.  I'm considering a hand-compile of 2.7, though it's quite
nice to get security patches from the distro.
We have 2x SATA disks, a 150GB and a 1TB.  The linux system is on software
RAID1 across the two disks.  The main cache is 600GB in size on a single
non-RAID 970GB partition at the end of the 1TB disk.  A smaller partition
is reserved on the other disk as a secondary cache, but that's not in use
yet and the squid logs are currently written there.  The filesystems for
the caches are reiserfs v3 and the cache format is AUFS.
We've been monitoring the hit rates, cpu usage, etc. using munin.  We
average about 13% byte hit rate.  Iowait is now a big issue -- perhaps not
surprisingly.  I had 4GB RAM in the server and PAE turned on.  I upped this
to 8GB with the idea of expanding squid's RAM cache.  Of course, I forgot
that the squid process can't address anything like that much RAM on a
32-bit system.  I think the limit is about 3GB, right?

I have two questions.  Whenever I raise cache_mem beyond about 2GB, I
notice squid terminates with signal 6 and restarts as the cache_mem fills.
I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
a little more politely in this situation -- either by not attempting to
allocate the extra RAM, or by giving a warning or an error?

My main question is, is there a sensible way for me to use the extra RAM?
I know the OS does disk caching with it but with a 600GB cache, I doubt
that'll be much help.  I thought of creating a 3-4GB ramdisk and using it
as a volatile cache for squid which gets re-created (either by squid -z or
by dd of an fs image) each time the machine reboots.  The thing is, I
don't know how squid addresses multiple caches.  If one cache is _much_
faster but smaller than the other, can squid prioritise using it for the
most regularly hit data or does it simply treat each cache as equal?  Are
there docs on these sorts of issues?
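For reference, a two-tier layout like the one described would be expressed with two cache_dir lines.  As far as I know, squid 2.x selects a store by load (or round-robin), not by object popularity, though size limits can steer small objects to the faster store.  A minimal sketch -- the paths and sizes here are illustrative only, not taken from this thread:

```
# Hypothetical squid.conf fragment -- directories and sizes are examples.
# Small, fast cache on a ramdisk mounted at /mnt/ramdisk (would need
# 'squid -z' after each reboot, as described above).  The max-size option
# keeps only small objects here:
cache_dir aufs /mnt/ramdisk/squid 3000 16 256 max-size=65536

# Large on-disk cache for everything else:
cache_dir aufs /cache0 600000 128 256

# Store selection: 'least-load' (the default) favours the least busy
# directory; 'round-robin' spreads objects evenly across stores.
store_dir_select_algorithm least-load
```

Note that squid does not migrate "hot" objects into the faster store after the fact; the ramdisk would simply absorb a share of new small objects.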

Any suggestions would be most welcome.

Gavin




From my limited experience, I would suggest giving squid a cache_mem of just a few hundred MB, and leaving the remaining RAM to squid for its indexes and to the OS for disk caching. My guess is that after a while this will get you close to a ramdisk-only setup anyway. It also moves the problem of addressing a very large RAM space from squid (which, being 32-bit only, can run into trouble) to the OS, which IMHO is better suited to the task.
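As a concrete sketch of that suggestion (the values are illustrative, not a tuned recommendation for this particular box):

```
# Hypothetical squid.conf fragment: a modest in-memory cache, leaving
# the remaining RAM to squid's index and the OS page cache.
cache_mem 512 MB

# Keep only small objects in cache_mem so it holds many hot objects
# rather than a few large ones:
maximum_object_size_in_memory 64 KB
```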

Also, I don't understand why you'd spend so much on memory instead of buying a few more spindles, which would give you a more balanced server in the end (maybe space constraints?).

Just my 2 cents.

--
Marcello Romani
