Steve Snyder wrote:
Have you tried reiserfs with noatime,notail? It's reported to be one of the fastest mainstream filesystem choices for Squid cache workloads.
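For reference, noatime and notail are set at mount time. A minimal /etc/fstab sketch (the device and mount point here are illustrative assumptions, not from the original thread):

```
# /etc/fstab entry -- device and mount point are examples only
/dev/sdb1  /var/spool/squid  reiserfs  noatime,notail  0 0
```

noatime skips the access-time update on every read, and notail disables reiserfs tail packing, both of which are commonly suggested for cache directories full of small files.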
I have to think that all this talk of the underlying file system is missing the point of the question. Why would ext2 give one result with diskd and another result with aufs? If the access times had changed along with a change of file system, then the file system would likely be to blame, but here only the store type changed. My question would be: are you SURE that you applied the epoll patch (and ran bootstrap.sh successfully)? Squid's configure script will silently accept options it doesn't understand. Check the output of "./configure --help" to be sure.
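One quick sanity check: `squid -v` prints the configure options the binary was built with, so you can grep for the epoll flag. The snippet below is a hedged sketch that pipes a sample of that output through grep (on a real box, run `squid -v` directly; the sample text here is an assumption for illustration):

```shell
# Simulated `squid -v` output; replace with: squid -v
sample='Squid Cache: Version 2.5.STABLE14
configure options: --enable-epoll --enable-snmp --with-pthreads'

# grep -q exits 0 if the pattern is present; `--` keeps the
# leading dashes of the pattern from being read as grep options.
if printf '%s\n' "$sample" | grep -q -- '--enable-epoll'; then
  echo "epoll compiled in"
else
  echo "epoll NOT compiled in"
fi
```

If the flag is missing from the real `squid -v` output, configure accepted --enable-epoll without actually knowing it, which is exactly the silent-acceptance problem described above.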
Chris
On Tuesday 27 June 2006 3:09 pm, O'Brien, Kevin wrote:
> Unfortunately we already use ext2 and noatime, but thanks for the
> suggestion.
>
> -=Kevin=-
>
> -----Original Message-----
> From: Mike Rambo [mailto:mrambo@xxxxxxxxxxxxx]
> Sent: Tuesday, June 27, 2006 9:33 AM
> To: Squid Users List
> Subject: Re: RE: Increased service times using aufs vs diskd
>
> O'Brien, Kevin wrote:
> >> No takers?
> >>
> >> The other interesting thing is that the service times increase as
> >> traffic decreases. Any theories on that?
Questions such as this are likely better served by the squid-dev mailing list. You are starting to question the internal workings of the aufs store type. Very few on this list are going to be able to answer questions regarding the inner workings of the code.
>>
>> -=Kevin=-
>>
>> _____________________________________________
>> Sent: Thursday, June 22, 2006 1:22 PM
>> To: 'squid-users@xxxxxxxxxxxxxxx'
>> Subject: Increased service times using aufs vs diskd
>>
>> I'm using squid as an accelerator and I switched my cache_dir
>> from diskd (used because the server is SMP) to aufs because of
>> various bugs in the diskd code (761, 1500). However, when I make
>> the switch (and clear the cache_dir contents) the overall, hit, miss,
I have to ask here, why did you clear out the cache_dir? ufs, aufs and diskd all use the same storage format (as stated in squid.conf). *shrug* No matter.
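To make the point concrete: switching store types is just a one-word change to the cache_dir line, and the on-disk layout carries over. A hedged squid.conf sketch (path and sizes are illustrative assumptions):

```
# ufs, aufs and diskd share the same on-disk layout, so switching
# the type does not require clearing the directory.
# Before:
#   cache_dir diskd /var/spool/squid 10000 16 256
# After:
cache_dir aufs /var/spool/squid 10000 16 256
```

The arguments are the cache size in MB (10000), then the number of first-level (16) and second-level (256) subdirectories; keeping them identical across the switch preserves the existing object store.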
>> and near miss service times increase by almost 10 times. Using
>> diskd, the 24-hour averages for overall, hit, and miss are 4ms,
>> and near miss is 1ms. After the switch, the times rocket up to
>> 44ms, 43ms, 49ms, and 45ms for overall, hit, miss, and near miss.
>> I am wondering if this is just a function of the squid process
>> now handling disk requests or an indication of another problem
>> (although ~40ms is probably not much of a problem).
>>
>> Here are the details of the system:
>> OS: RHEL4
>> Squid: 2.5.STABLE14 with epoll patch
>> Build options: ./configure --enable-epoll --enable-snmp
>>   --enable-removal-policies=heap,lru --enable-storeio=aufs,diskd,ufs
>>   --with-pthreads --enable-cachemgr-hostname=localhost
>>   --disable-ident-lookups --enable-truncate --enable-cache-digests
>>   --enable-htcp
Chris