> Hello,
>
> I am looking to utilize squid as a reverse proxy for a medium sized
> implementation that will need to scale to a lot of requests/sec (a lot
> is a relative/unknown term). I found this very informative thread:
> http://www.squid-cache.org/mail-archive/squid-users/200704/0089.html
>
> However, is clustering the OS the only way to provide a high
> availability (active/active or active/standby) solution? For
> example, with Red Hat Cluster Suite. Here is a rough drawing of my
> logic:
>
> Client ---> FW ---> Squid ---> Load Balancer ---> Webservers
>
> They already have expensive load balancers in place so they aren't
> going anywhere. Thanks for any insight!

IIRC there have been some large-scale sites set up using CARP in grids
between squid sibling accelerators. The problem we have here is that
few of the large-scale sites share their configurations back to the
community.

If you are doing any sort of scalable deployment I'd suggest looking
at the ICP-multicast and CARP setups for bandwidth scaling.

Squid itself does not include any means of failover for connected
clients if an individual cache dies. That is up to the
FW/router/switch/loadbalancer between squid and clients. All squid can
do is restart itself quickly when something major occurs.

Amos
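
For anyone wanting a concrete starting point, the front-end side of the
CARP grid Amos refers to looks roughly like the sketch below. The
hostnames, ports and site name are placeholders, not a configuration
from any of the large-scale sites mentioned:

# squid.conf on the front-end accelerator (all names/ports hypothetical)
http_port 80 accel defaultsite=www.example.com

# Back-end squid accelerators, selected by CARP URL-hashing.
# 'no-query' because CARP picks a peer by hash, not by ICP query.
cache_peer backend1.example.com parent 3128 0 carp no-query
cache_peer backend2.example.com parent 3128 0 carp no-query
cache_peer backend3.example.com parent 3128 0 carp no-query

# Always forward through the CARP peers, never straight to the origin.
never_direct allow all

Each back-end then runs as an ordinary accelerator pointing at the
existing load balancer or webservers.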
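
The ICP-multicast side replaces one unicast ICP query per sibling with
a single query to a multicast group that every cache in the grid joins.
Again a sketch only, with placeholder group address and hostnames:

# squid.conf on each member of the grid (addresses hypothetical)
icp_port 3130

# Send ICP queries to the multicast group rather than to each sibling.
cache_peer 224.9.9.9 multicast 3128 3130 ttl=16

# Fetch hits over HTTP from whichever sibling answered the query.
cache_peer sibling1.example.com sibling 3128 3130 multicast-responder
cache_peer sibling2.example.com sibling 3128 3130 multicast-responder

# Join the group so this cache also receives the multicast queries.
mcast_groups 224.9.9.9

The ttl= value controls how far the multicast queries propagate; keep
it as small as your network layout allows.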
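
Since squid itself has no client failover, the active/standby setup the
original poster asked about has to live in front of squid, typically as
a floating virtual IP. One common way to do that (an assumption here,
not something squid provides or the thread confirms) is VRRP via
keepalived:

# /etc/keepalived/keepalived.conf on the primary squid box
# (interface, IDs and addresses are hypothetical)
vrrp_instance SQUID_VIP {
    state MASTER          # the standby box uses 'state BACKUP'
    interface eth0
    virtual_router_id 51
    priority 100          # standby uses a lower priority, e.g. 50
    advert_int 1
    virtual_ipaddress {
        192.0.2.10        # clients/firewall point at this VIP
    }
}

If the primary dies, the standby claims the VIP and clients reconnect
to it; in-flight connections are still lost, as Amos notes.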