Ben Hollingsworth wrote:
I've got squid running as a reverse proxy, terminating HTTPS requests
and forwarding them to HTTP(S) servers on the inside. I've now gotten
a request to use this same proxy to load balance requests between
multiple internal servers. It looks like you can do this by
specifying two "cache_peer" lines with different IPs, giving each one the
"round-robin" flag, like this:
cache_peer InsideIP1 parent 80 0 no-query originserver login=PASS name=InsideName-peer1 round-robin
cache_peer InsideIP2 parent 80 0 no-query originserver login=PASS name=InsideName-peer2 round-robin
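For context, here is a trimmed-down sketch of the sort of config I have
in mind. The cert paths, www.example.com, and the "inside_site" acl name
are placeholders rather than my real values, and I'm going from memory
of squid.conf.documented for the exact option spellings:

# HTTPS termination (placeholder cert paths; my real line has more options)
https_port 443 cert=/etc/squid/cert.pem key=/etc/squid/key.pem defaultsite=www.example.com

# hostname the outside clients use (placeholder)
acl inside_site dstdomain www.example.com

cache_peer InsideIP1 parent 80 0 no-query originserver login=PASS name=InsideName-peer1 round-robin
cache_peer InsideIP2 parent 80 0 no-query originserver login=PASS name=InsideName-peer2 round-robin

# route matching requests to the two peers, never straight to the origin
cache_peer_access InsideName-peer1 allow inside_site
cache_peer_access InsideName-peer2 allow inside_site
never_direct allow inside_site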
Using this setup, what will happen if one of those servers goes down?
Will half of the requests fail, or will squid transparently resend the
request to the working server?
Is there any way to specify automatic connection persistence, where
all requests from a certain client will go to the same back-end server
so as to maintain session state and the like? I don't want to split
them up manually using ACLs (along the lines of the sketch below); I
want squid to do this for me while still allowing for down servers
(see above).
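To be clear, by "manually using ACLs" I mean something like the
following, i.e. replacing the simple cache_peer_access rules above with
static per-client routing. The client ranges here are hypothetical, and
it assumes the stock "acl all src 0.0.0.0/0.0.0.0" definition from the
default config. I'd rather avoid this because it doesn't adapt when one
server goes down:

# pin half the clients to each peer by source address (hypothetical ranges)
acl clients_a src 10.0.0.0/17
acl clients_b src 10.0.128.0/17

cache_peer_access InsideName-peer1 allow clients_a
cache_peer_access InsideName-peer1 deny all
cache_peer_access InsideName-peer2 allow clients_b
cache_peer_access InsideName-peer2 deny all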
BTW, I'm running Squid 2.6.STABLE6 on RHEL 5.