Chris Robertson wrote:
-----Original Message-----
From: Henrik Nordstrom [mailto:hno@xxxxxxxxxxxxxxx]
Sent: Monday, February 07, 2005 11:58 AM
To: Chris Robertson
Cc: squid-users@xxxxxxxxxxxxxxx
Subject: RE: [squid-users] Help..
On Mon, 7 Feb 2005, Chris Robertson wrote:
Not entirely true. There is a benefit on a multi-processor box. Squid,
being a single-threaded application, can't natively take advantage of
multiple processors. Running multiple instances of Squid is beneficial
in such a situation.
If CPU usage is your main bottleneck (most often it is not the main
bottleneck)
Regards
Henrik
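For what it's worth, running a second instance on a dual-processor box mostly comes down to a second squid.conf that differs only in ports and paths, started with squid -f. A sketch (all ports and paths below are illustrative, not our production values):

```conf
# Second-instance config, e.g. /usr/local/squid/etc/squid-b.conf
# Everything that must be unique per instance gets its own value.
http_port 8081
icp_port 3131
pid_filename /usr/local/squid/logs/squid-b.pid
cache_access_log /usr/local/squid/logs/access-b.log
cache_log /usr/local/squid/logs/cache-b.log
cache_dir ufs /cache-b 3072 16 256

# Start alongside the first instance:
#   squid -f /usr/local/squid/etc/squid-b.conf
```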
With high latency, squid seems to eat CPU with impunity.
http://mrtg.schoolaccess.net/squid/
~70 requests/sec, ~850KB/sec, nearly 50% CPU on a Xeon 3GHz w/2GB RAM and
very little in the way of ACLs:
http_port 8080
cache_peer proxy2.schoolaccess.net sibling 8080 3130 proxy-only no-digest
cache_peer proxy3.schoolaccess.net parent 8080 3130 round-robin proxy-only no-digest
cache_peer proxy3.schoolaccess.net parent 8081 3131 round-robin proxy-only no-digest
hierarchy_stoplist
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 32 MB
maximum_object_size 10 KB
cache_dir ufs /cache1 3072 16 256
cache_dir ufs /cache2 3072 16 256
cache_access_log /usr/local/squid/logs/access.log
cache_log /usr/local/squid/logs/cache.log
cache_store_log none
cache_swap_log /usr/local/squid/logs/swap.log
pid_filename /usr/local/squid/logs/squid.pid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
negative_ttl 30 seconds
negative_dns_ttl 30 seconds
half_closed_clients off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl Corp src xxx.xxx.xxx.xxx/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl mrtg src xxx.xxx.xxx.xxx/32
acl snmppublic snmp_community public
http_access allow manager localhost
http_access allow manager Corp
http_access deny manager
http_access deny to_localhost
http_access allow all
icp_access allow all
cache_mgr schoolaccess@xxxxxxx
cache_effective_user squid
cache_effective_group squid
log_icp_queries off
icp_hit_stale on
snmp_access allow snmppublic localhost
snmp_access allow snmppublic mrtg
snmp_access deny all
nonhierarchical_direct off
strip_query_terms off
coredump_dir /usr/local/squid/cache
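For anyone reading along, two of the less obvious directives above can be sketched in Python. This is a simplification of what Squid actually does (real freshness checks also honor Expires and Cache-Control headers):

```python
import re

# acl QUERY urlpath_regex cgi-bin \?
# Multiple patterns on one acl line are ORed together.
QUERY_PATTERNS = [re.compile(r"cgi-bin"), re.compile(r"\?")]

def matches_query_acl(urlpath):
    """True if the URL path would match the QUERY acl (and so be uncached)."""
    return any(p.search(urlpath) for p in QUERY_PATTERNS)

# refresh_pattern <regex> <min> <percent> <max>   (min/max in minutes)
# Simplified rule: fresh below min, stale above max, and in between
# fresh while the age is within <percent> of the Last-Modified age.
def is_fresh(age, lm_age, min_m, percent, max_m):
    if age <= min_m:
        return True
    if age > max_m:
        return False
    return age <= lm_age * (percent / 100.0)

print(matches_query_acl("/cgi-bin/search"))   # True
print(matches_query_acl("/static/logo.png"))  # False
# refresh_pattern . 0 20% 4320:
print(is_fresh(60, 600, 0, 20, 4320))         # True  (60 <= 20% of 600)
print(is_fresh(5000, 600, 0, 20, 4320))       # False (past the 4320 max)
```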
Most of the requests to these servers come in over satellite (~600ms latency)
through Squid 2.5.STABLE7 servers, and that seems to make a huge difference.
FWIW, proxy1 and proxy2 are running Red Hat Linux 9; proxy3 is running FreeBSD
5.2 and has 4GB of RAM. All three are on the same switch, and only a
single router hop (over Ethernet) from the fibre. Access to the cache is
limited via a firewall. The MRTG graphs for proxy3 use combined
statistics for the two Squid processes running on it (as it's a dual-processor
box). Running top shows that about two-thirds of Squid's CPU usage is "system"
vs. "user" on all three boxes. The select loop takes around 4ms to execute
on all three boxes. *shrug*
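That single select loop is also why one Squid process can only ever keep one CPU busy. A toy sketch of the model (not Squid's actual code, just the same pattern: one process blocking on select() and servicing whatever is readable):

```python
import select
import socket

# Toy single-threaded select() loop -- the I/O model Squid 2.x uses.
# One process iterating one loop saturates at most one CPU, hence
# running two instances on a dual-processor box.
def run_once():
    server, client = socket.socketpair()
    client.sendall(b"GET")
    # Block until a descriptor is readable (one pass of the loop).
    readable, _, _ = select.select([server], [], [], 1.0)
    for sock in readable:
        sock.sendall(sock.recv(16))  # echo the request back
    readable, _, _ = select.select([client], [], [], 1.0)
    reply = client.recv(16) if readable else b""
    server.close()
    client.close()
    return reply

print(run_once())  # b'GET'
```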
Perhaps it's not an issue with Squid itself. I'm not too concerned, as it
works well, and overall surfing is faster with Squid than without (due to
the on-site caches), and all traffic flows past the filtering servers (due to
the central caches).
At some point in the future, I'm likely going to turn this lot into an LVS
cluster, with a pair of smaller (cheaper) boxes acting as a redundant
front-end controller.
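If it helps anyone pondering the same move: a redundant LVS director pair is commonly built with keepalived, which handles both the director failover (VRRP) and the real-server health checks. A sketch of the balancing stanza (all addresses and values below are made up):

```conf
# keepalived.conf fragment: round-robin LVS/NAT across two caches.
virtual_server 192.168.1.100 8080 {
    delay_loop 10
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.11 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.1.12 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```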
Chris
LVS, that's what I'm thinking about at the moment. :)
regards