Hi Pablo,

Pablo Sanchez wrote:
>> The only half-solution I have come up with so far is to define a
>> 'director' box with the 'bob' alias, and then periodically grab load
>> metrics from the participating hosts, determine which of the 'bob's is
>> the least loaded, and then *cough* update a DNAT rule to redirect
>> requests coming in for 'bob' to the least-loaded 'bobX'.
>
> I believe the above is what you'll want to implement.  As your research
> has probably already shown, the load balancers on the market are for
> HTTP.  A good load balancer needs to communicate with the backend
> servers so it has the load data and other metrics necessary to decide
> which server should serve.

You're right about load-balancing HTTP.  Everybody and their dog wants
to load-balance HTTP for some reason. ;)  But my dog insists on
load-balancing SSH.  I have also found something called LVS, but as I've
mentioned in another post it is unsuitable for us.

Grabbing the load data, as I've said, is no problem--the standard SNMP
daemon provides CPU load by default, I believe, and it's no trouble at
all to expose additional information.  This part is trivial since I've
already implemented SNMP elsewhere.

> You could use wget to fetch metrics from all the servers (include a
> timestamp so you know when your data is stale) and have the director
> consider this information when it punches down new iptables rules.

SNMP would be faster and more lightweight, I believe; wget implies I'd
have to run either an HTTP or FTP service on each of those machines.
Also, those connections would be subject to TCP timeouts, so if one of
the machines were down, my metric-gathering script would take forever
timing out on it; SNMP fails a lot faster.  And there'd be quite a bit
less parsing of the results to do.

Thanks for your response!

Drew.

--
Drew Leske :: Systems Group/Unix, Computing Services, University of Victoria
dleske@xxxxxxx / +1250 472 5055 (office) / +1250 588 4311 (cel)
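P.S.  For the archives, here's a minimal sketch of the "pick the
least-loaded bob" step the director would run from cron.  The hostnames,
loads, the UCD-SNMP laLoad OID, and the iptables invocation in the
comments are illustrative assumptions, not our actual setup:

```shell
#!/bin/sh
# Sketch only: choose the backend with the lowest load from lines of
# "hostname load" pairs on stdin, printing the winning hostname.
#
# In practice each pair would be gathered with something like:
#   load=$(snmpget -v2c -c public -Oqv "$host" UCD-SNMP-MIB::laLoad.1)
# and the winner would be punched into the DNAT rule, e.g.:
#   iptables -t nat -R PREROUTING 1 -d $VIP -p tcp --dport 22 \
#            -j DNAT --to-destination "$best"
pick_least_loaded() {
    # General-numeric sort on the load column, take the top line's host.
    sort -k2 -g | head -n 1 | cut -d' ' -f1
}

# Example with made-up hosts and loads:
best=$(pick_least_loaded <<'EOF'
bob1 0.72
bob2 0.15
bob3 1.90
EOF
)
echo "$best"
```

Hosts that time out on the SNMP query simply contribute no line, so a
dead machine drops out of consideration instead of hanging the script.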