We're using IPVS/LVS in our configuration. We have 13 squid
instances running: 10 on Debian with 32-bit squid and 3 on
Solaris 9 on Sun Netra X1s. The Netras are for low-traffic stuff.
The 2 load balancers are Dell 1U boxes with quad Intel NICs,
running Debian and packages from Ultra Monkey. Uptime's been over
a year without any problems.
Our layout:
world -> load balancers -> squid pool -> cgi servers
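Roughly what the IPVS side boils down to, with made-up addresses
(the VIP and real-server IPs here are only placeholders):

  # virtual service on the VIP, weighted round-robin
  ipvsadm -A -t 203.0.113.10:80 -s wrr
  # each squid box added as a real server (direct routing)
  ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g -w 100
  ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12:80 -g -w 100

In practice the ldirectord that comes with Ultra Monkey maintains
this table and health-checks the real servers, so you don't
hand-edit it.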
mike
At 05:01 PM 12/17/2007, Amos Jeffries wrote:
> Hello,
>
> I am looking to utilize squid as a reverse proxy for a medium sized
> implementation that will need to scale to a lot of requests/sec (a lot
> is a relative/unknown term). I found this very informative thread:
> http://www.squid-cache.org/mail-archive/squid-users/200704/0089.html
>
> However, is clustering the OS the only way to provide a high
> availability (active/active or active/standby) solution? For
> example, with Red Hat Cluster Suite. Here is a rough drawing of my
> logic:
> Client ---> FW ---> Squid ---> Load Balancer ---> Webservers
>
> They already have expensive load balancers in place so they aren't
> going anywhere. Thanks for any insight!
>
IIRC there have been some large-scale sites set up using CARP in grids
between squid sibling accelerators. The problem we have here is that few
of the large-scale sites share their configurations back to the community.
If you are doing any sort of scalable setup I'd suggest looking at the
ICP-multicast and CARP configurations for bandwidth scaling.
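A rough sketch of what that looks like in squid.conf (2.6-style
syntax; the hostnames and multicast group are only examples):

  # front-end: CARP hashing across a grid of accelerator peers
  cache_peer accel1.example.com parent 80 0 carp
  cache_peer accel2.example.com parent 80 0 carp

  # or: send ICP queries to one multicast group instead of each peer
  cache_peer 224.9.9.9 multicast 3128 3130 ttl=64
  cache_peer accel1.example.com sibling 3128 3130 multicast-responder
  cache_peer accel2.example.com sibling 3128 3130 multicast-responder

  # and on each listening sibling, join the group
  mcast_groups 224.9.9.9

Check the cache_peer and mcast_groups documentation for the exact
options your squid version supports.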
Squid itself does not include any means of failover for connected clients
if an individual cache dies. That is up to the
FW/router/switch/load balancer between squid and the clients. All squid
can do is restart itself quickly when something major occurs.
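As one illustration of what that layer does (names and addresses
below are invented), a keepalived pair can float the service IP
between balancers with VRRP and pull a dead squid out of the pool
with a TCP health check:

  vrrp_instance VI_1 {
      state MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      virtual_ipaddress {
          203.0.113.10
      }
  }

  virtual_server 203.0.113.10 80 {
      delay_loop 10
      lb_algo wrr
      lb_kind DR
      protocol TCP

      real_server 10.0.0.11 80 {
          weight 100
          TCP_CHECK {
              connect_port 80
              connect_timeout 3
          }
      }
  }

Any equivalent health check on the load balancers already in place
will do the same job.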
Amos