Greetings,

The Red Hat Cluster Suite page says the following:

"For applications that require maximum uptime, a Red Hat Enterprise Linux
cluster with Red Hat Cluster Suite is the answer. Specifically designed for
Red Hat Enterprise Linux, Red Hat Cluster Suite provides two distinct types
of clustering:

 * Application/Service Failover - Create n-node server clusters for
   failover of key applications and services
 * IP Load Balancing - Load balance incoming IP network requests across a
   farm of servers"

The implication seems to be that the first type addresses high
availability, and the second scalability. What is the optimal way to get
both?

Please understand that I am already a user of GFS and LVS. I'm asking the
question because the two seemingly have nothing in common. For example,
cman knows about cluster membership and can react immediately when a node
leaves the cluster or is fenced. LVS (together with either piranha or
ldirectord), on the other hand, keeps its own list of real servers,
periodically checking each one and removing any found to be unresponsive
(a minimal sketch of what I mean is in the postscript below).

It seems like there are a couple of drawbacks to this bifurcated design:

 - once cman realizes a node has left the cluster, there is a delay before
   ipvs updates its configuration, during which user requests can still be
   routed to a dead server
 - two distinct sets of cluster configuration have to be maintained

Am I misunderstanding something fundamental, or is that just the way it is?

-Jeff
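
P.S. For concreteness, here is a minimal sketch of the kind of
ldirectord.cf health-check configuration I am talking about. The
addresses, ports, and check parameters are made up for illustration; the
point is only that ldirectord polls each real server on its own timer,
completely independently of cman's membership view:

    # global health-check settings: poll every 5s, drop a server after 10s
    checktimeout=10
    checkinterval=5
    quiescent=no

    # one virtual service, balanced across two real servers
    virtual=192.168.1.100:80
            real=192.168.1.11:80 gate
            real=192.168.1.12:80 gate
            service=http
            request="index.html"
            receive="OK"
            scheduler=wlc
            protocol=tcp
            checktype=negotiate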