'Morning Shawn,
Yeah, sounds like we have a similar scenario!
We pushed a little hard on our refresh_pattern, so we're getting a hit
ratio of ~40%.
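For reference, "pushed hard" means rules along these lines in squid.conf (the numbers here are illustrative, not my exact values):

```
# Cache static objects aggressively: min 1 day, up to 30 days.
refresh_pattern -i \.(gif|jpg|jpeg|png|ico)$  1440  80% 43200
refresh_pattern -i \.(zip|exe|pdf)$          10080  90% 43200
# Everything else gets the conservative default.
refresh_pattern .                                0  20%  4320
```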
Our typical load is 10k req/min, peaking at 15k around 11am and 5pm.
After hours, the traffic drops to an insignificant level.
Try DNS load balancing on your system for a while, using the old box as
the second node. And it's just great for maintenance...
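By DNS LB I just mean round-robin A records, something like this in the zone file (hostnames and addresses are made up):

```
; proxy resolves to both boxes in turn; clients spread across them
proxy   IN  A   192.0.2.10   ; old Squid box
proxy   IN  A   192.0.2.11   ; new Squid box
```

For maintenance you just drop one record and wait for the TTL to expire.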
I don't think logging is the CPU-consuming villain on your
machine... Believe me, winbindd, ntlm_auth and the ACLs are far hungrier.
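For context, the winbind/NTLM side looks roughly like this in squid.conf (a sketch, not my exact config; the helper path varies by distro):

```
# Each NTLM helper keeps a pipe to winbindd; this is where the CPU goes.
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5

# Every request then walks the ACL lists before being allowed through.
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```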
At midnight I rotate the logs, send a copy via FTP to another machine for
log parsing (we use SARG), and then compress the file. On the 2nd of every
month, the compressed logs are transferred via FTP to another machine,
written to tape, and deleted from the Squid server. I don't keep old logs
on the Squid machine; at most, the last 30 days. All of this with silly
shell scripts.
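The nightly part of those silly scripts boils down to something like this (a sketch; the FTP transfer to the SARG box is elided, and the paths are placeholders):

```shell
# Nightly Squid log housekeeping. Call with the log directory, e.g.
#   rotate_squid_logs /var/log/squid
rotate_squid_logs() {
    logdir="$1"
    stamp=$(date +%Y%m%d)

    # Ask Squid to reopen its logs (access.log -> access.log.0),
    # when Squid is actually present and running.
    if command -v squid >/dev/null 2>&1; then
        squid -k rotate || true
        sleep 2
    fi

    # (FTP a copy of access.log.0 to the SARG machine here.)

    # Compress the rotated log under a dated name, then drop the original.
    if [ -f "$logdir/access.log.0" ]; then
        gzip -c "$logdir/access.log.0" > "$logdir/access.log.$stamp.gz"
        rm "$logdir/access.log.0"
    fi

    # Keep only the last 30 days of compressed logs on the Squid box.
    find "$logdir" -name 'access.log.*.gz' -mtime +30 -delete
}
```

Cron runs it at midnight; the monthly tape/cleanup job is a second script in the same spirit.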
I have an old (~three months) squid.conf diff on my personal
homepage, if you want to check it out:
http://www.pt2rod.qsl.br/freebsdstuffs/acessorios/squid.conf.diff.htm
Best regards,
Rodrigo.
----- Original Message -----
From: "Shawn Wright" <swright@xxxxxxxxx>
To: <squid-users@xxxxxxxxxxxxxxx>
Sent: Thursday, March 30, 2006 2:17 AM
Subject: Re: Disk performance basics
Sounds like we have similar needs - all of our users are auth'd to an NT
domain (5 auth processes), and a pretty decent list of ACLs, including
~300,000 denial entries. How many users do you support? What kind of
peak/avg hits/sec do you see? What kind of hit ratios do you see? (we get
~16-20%) Google Earth and Google Video are starting to take a decent chunk
of bandwidth, even with 70 KB/s delay pools.
I was wondering about the LB using DNS... this is something we've
considered doing. We've done it short-term several times when
transitioning proxies, and it seemed to work fine, but we never left it in
place for long.
I'm noticing our CPU usage is climbing today, possibly due to the large
log file, which is now over 2 GB in size. I must plan for a larger log
volume...
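P.S. Since you mentioned delay pools: a per-user ~70 KB/s cap looks roughly like this in squid.conf (a sketch; your exact pool layout surely differs):

```
# One class-2 pool: no aggregate limit, ~70 KB/s (71680 B/s) per client IP.
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 71680/71680
delay_access 1 allow all
```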