Re: squid on 32-bit system with PAE and 8GB RAM

Hi,

thanks for the reply.

On Tue, 17 Mar 2009, Amos Jeffries wrote:

> FYI: The latest Intrepid or Jaunty package should work just as well in  
> Hardy.

I'll look into this.  I tried to build the Intrepid Debian package from
source, but hit a build dependency that apparently isn't available on
Hardy: libgssglue-dev.  I'll look into installing the pre-built package
instead, though I would've expected it to need newer versions of some
libraries.
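
Roughly what I'd try, I suppose (the filename below is just a placeholder,
and whether Hardy's libraries satisfy the binary's dependencies remains to
be seen):

	# sketch only: try the prebuilt Intrepid binary package on Hardy
	dpkg -I squid_VERSION_i386.deb   # inspect its Depends: line first
	dpkg -i squid_VERSION_i386.deb   # install the binary package
	apt-get -f install               # let apt pull in anything Hardy can satisfy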

In general, I'm looking for simple maintenance and patching, but not at the
expense of too much performance.  Would we benefit much from a hand-built
squid install?  In what way?

>> Of course, I forgot that the squid process can't address anything like
>> that much RAM on a 32-bit system.  I think the limit is about 3GB,
>> right?
>
> For 32-bit I think it is yes. You can rebuild squid as 64-bit or check  
> the distro for a 64-bit build.

The server hardware isn't 64-bit, so surely I can't run a 64-bit squid
build, can I?
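
Though it may be worth double-checking the CPU itself; a quick sketch,
assuming Linux exposes the flags in /proc:

	# 'lm' (long mode) in the CPU flags means the CPU can run 64-bit code
	grep -wq lm /proc/cpuinfo && echo "64-bit capable" || echo "32-bit only"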

> However keep this in mind:  rule-of-thumb is 10MB index per GB of cache.
>
> So your 600 GB disk cache is likely to use ~6GB of RAM for index +  
> whatever cache_mem you allocate for RAM-cache + index for RAM-cache + OS  
> and application memory.

Ouch.  That's not a rule of thumb I'd seen anywhere.  I'm really not
observing it either.  Squid runs stably for days with a 1.7GB cache_mem
and a 600GB disk cache.

It may help that we're allowing large objects into the cache and using
"heap lfuda".  We plot the average object size with munin and it's about
90KB.  Presumably the 10MB-per-1GB figure is strongly a function of the
average object size.
	http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#Squid
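
Running rough numbers under that assumption (the ~128 bytes per index entry
below is my guess, backed out from the 10MB/GB rule at a ~13KB average
object size):

	600 GB / 90 KB per object  ~= 7 million objects
	7M objects x ~128 bytes    ~= 0.9 GB of index

	versus the ~13 KB average the rule seems to assume:
	600 GB / 13 KB             ~= 46 million objects x ~128 bytes ~= 6 GB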

The drops in RAM usage are all due to squid restarting.  As long as I keep
the cache_mem below about 1.8-2GB, squid stays up for days at a time.

>> I have two questions.  Whenever I up the cache_mem beyond about 2GB, I
>> notice squid terminates with signal 6 and restarts as the cache_mem fills.
> I presume this is squid hitting the 3GB-odd limit?  Could squid not behave
> a little more politely in this situation -- either by not attempting to
> allocate the extra RAM, or by giving a warning or an error?
>
> cache.log should contain a FATAL: message and possibly a line or two  
> beforehand about why and where the crash occured.
> Please can you post that info here.

My apologies, there is a useful error, though it's in syslog rather than
cache.log.

Mar 15 22:50:24 watcher squid[6751]: httpReadReply: Excess data from "POST http://im.studivz.net/webx/re";
Mar 15 22:52:50 watcher squid[6748]: Squid Parent: child process 6751 exited due to signal 6
Mar 15 22:52:53 watcher squid[4206]: Starting Squid Cache version 2.6.STABLE18 for i386-debian-linux-gnu...
Mar 15 22:52:53 watcher squid[4206]: Store logging disabled
Mar 15 22:52:53 watcher squid[4206]: Rebuilding storage in /var/spool/squid/cache2 (DIRTY)
Mar 15 22:54:29 watcher squid[4206]:    262144 Entries Validated so far.
Mar 15 22:54:29 watcher squid[4206]:    524288 Entries Validated so far.

I read this before but missed the "out of memory" error, which appears in
the syslog:

Mar 15 22:52:50 watcher out of memory [6751]

This seems to happen every time:

Mar 10 11:58:12 watcher out of memory [22646]
Mar 10 17:52:03 watcher out of memory [24620]
Mar 11 00:57:52 watcher out of memory [31626]
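
If it is the process address space filling up, something like this should
confirm it before the next crash (sketch; substitute the squid child's PID):

	# watch the child's address space creep toward the ~3GB ceiling
	grep -E 'VmPeak|VmSize' /proc/<child-pid>/status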

>> My main question is, is there a sensible way for me to use the extra RAM?
>> I know the OS does disk caching with it but with a 600GB cache, I doubt
>> that'll be much help.
>
> RAM swapping (disk caching by the OS) is one major performance killer.  
> Squid needs direct access to all its memory for fast index searches and  
> in-transit processing.

Of course.  We definitely don't see any swapping to disk; I watch our
munin memory graphs carefully for that.  What I mean is that Linux does
the opposite where RAM is unused -- it caches disk data in RAM, reads
ahead on open files, etc. -- but this probably won't help much when the
amount of data on disk is very large.

	http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie.html#System

>> I thought of creating a 3-4GB ramdisk and using it
>> as a volatile cache for squid which gets re-created (either by squid -z or
>> by dd of an fs image) each time the machine reboots.  The thing is, I
>> don't know how squid addresses multiple caches.  If one cache is _much_
>> faster but smaller than the other, can squid prioritise using it for the
>> most regularly hit data or does it simply treat each cache as equal?  Are
>> there docs on these sorts of issues?
>
> No need, that is already built into Squid. cache_mem defines the amount
> of RAM-cache Squid uses.

Right, but if the squid process is hitting its 32-bit memory limit, I
can't increase this any more, can I?  This is why I'm suggesting a ramdisk
cache, as that won't expand squid's internal memory usage.
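
Something along these lines is what I had in mind (paths and sizes are
illustrative; the cache_dir size is kept below the tmpfs size for headroom,
and I'm assuming an aufs-enabled build):

	# mount a 3GB tmpfs; its pages live in RAM but outside squid's
	# address space
	mount -t tmpfs -o size=3g tmpfs /var/spool/squid/ramcache

	# squid.conf: add it as a cache_dir (size in MB)
	cache_dir aufs /var/spool/squid/ramcache 2800 16 256

	# after each boot, recreate the swap directories before starting squid
	squid -z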

> Squid allocates the disk space based on free space and attempts to  
> spread the load evenly over all dirs to minimize disk access/seek times.  
> cache_mem is used for the hottest objects to minimize delays even 
> further.

I suppose one way to artificially "prioritise" the ramdisk caches might be
to have n smaller ramdisk caches?  Squid would then send roughly 1/(n+1) of
the load to the big disk cache and the remaining n/(n+1) to the n ramdisk
caches?
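
In squid.conf that would look something like this (paths and sizes are
illustrative, and I'm assuming 2.6's store_dir_select_algorithm directive
behaves as documented):

	cache_dir aufs /var/spool/squid/cache1 614400 64 256   # the big disk cache
	cache_dir aufs /ramcache1 1000 16 256                  # three 1GB tmpfs dirs
	cache_dir aufs /ramcache2 1000 16 256
	cache_dir aufs /ramcache3 1000 16 256
	# round-robin gives each dir an equal share; the default is least-load
	store_dir_select_algorithm round-robin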

Sorry, some of you may be scratching your heads and wondering why one would
do something so crazy.  I've just got 4GB of RAM sitting more or less idle
and a really busy disk, and I'd like to use one to help the other :-)

Gavin

