GIGO . wrote:
I have successfully set up running multiple instances of Squid for the sake of surviving a cache directory failure. However, I still have a few questions about peering multiple instances of Squid. Please guide me in this respect.
In my setup, am I correct that my second instance is doing the caching on behalf of requests sent to Instance 1?
You are right in your understanding of what you have configured. I have
some suggestions below on a better topology, though.
Which protocol should I select for the peers in this scenario? What is the recommendation (CARP, digest, or ICP/HTCP)?
Under your current config there is no selection; ALL requests go through
both peers:
Client -> Squid1 -> Squid2 -> WebServer
or
Client -> Squid2 -> WebServer
so Squid2 and the web server are both bottleneck points.
Is the syntax of my cache_peer directive correct, or should the local loopback address not be used this way?
Syntax is correct.
Use of localhost does not matter. It is a useful choice, providing
some security and extra speed for the inter-proxy traffic.
What is the recommended protocol for peering Squid instances with each other?
It does not matter for your existing config, because of the "parent"
selection.
What is the recommended protocol for peering Squid with ISA Server?
"parent" is the peering type for origin web servers, combined with the
"originserver" selection option.
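As a sketch, a cache_peer line pointing Squid at an origin server looks like this (the address and port here are placeholders, not your real ISA Server):

```
# Hypothetical origin server address; "originserver" marks the peer as a
# web server rather than another proxy, "default" makes it the last-resort parent.
cache_peer 192.0.2.10 parent 80 0 no-query originserver default
```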
Instance 1:
visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/access.log
cache_log /var/logs/cache.log
cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
Instance 2:
visible_hostname SquidProxylhr
unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 50000 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
What I suggest for failover is two proxies configured identically:
* a cache_peer "sibling" link between them, using digest selection, to
localhost (different ports);
* permitting both to cache data from the origin (optionally from the
peer);
* a cache_peer "parent" link to the web server, with the "originserver"
and "default" options enabled.
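Sketched out, each of the two proxies might carry something like the following (ports and the origin address 192.0.2.10 are assumptions; the second proxy swaps the two local ports):

```
# Proxy A listens on 3128; Proxy B uses 3129 and reverses the two ports.
http_port 3128

# Sibling link to the other local instance, selected via cache digests.
cache_peer 127.0.0.1 sibling 3129 0 no-query

# The origin web server, used as the default parent.
cache_peer 192.0.2.10 parent 80 0 no-query originserver default
```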
This topology uses a single layer of multiple proxies, possibly with
load balancing (hardware, iptables, etc.) sending alternate requests to
each of the two proxies' listening ports.
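One way to do that alternating split with iptables alone is the "statistic" match in the nat table; this is only a sketch, with 8080 as the assumed client-facing port and 3128/3129 as the assumed proxy listening ports:

```
# Send every second new connection to the second proxy port;
# the rest fall through to the first.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -m statistic --mode nth --every 2 --packet 0 \
  -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j REDIRECT --to-ports 3128
```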
It is useful for small-to-medium businesses requiring scale with minimal
hardware, probably reusing load balancers already purchased for
earlier attempts. IIRC the benchmark for this is somewhere around
600-700 req/sec.
The next step up in performance and HA is an additional layer of
Squid acting as the load balancer, doing CARP to reduce cache duplication
and remove sibling data transfers. This form of scaling out is how
WikiMedia serve their sites.
It is documented somewhat in the wiki as ExtremeCarpFrontend, with a
benchmark so far of 990 req/sec for a single box.
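On the frontend, CARP selection is just a matter of listing the backends with the "carp" option, along the lines of the following sketch (the backend hostnames are placeholders):

```
# Frontend squid.conf fragment: hash requests across two CARP parents so
# each URL consistently lands on the same backend cache.
cache_peer backend1.example.com parent 3128 0 carp no-query
cache_peer backend2.example.com parent 3128 0 carp no-query
```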
These maximum-speed benchmarks are only achievable in reverse-proxy
setups. Regular ISP setups can expect their maximum to be somewhere
below 1/2 or 1/3 of that rate, due to content diversity and the RTT lag
of remote servers.
Amos
--
Please be using
Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
Current Beta Squid 3.1.0.18