Ben, I'm afraid you're completely missing the distinction between
internal cluster communications (the "interface" definitions in
corosync.conf) and the clients' communications with networked cluster
resources.

On Mon, Feb 6, 2012 at 5:34 PM, Ben Shepherd <bshepherd@xxxxxxxxx> wrote:

> Basically traffic of both types comes in from BOTH networks.
> We send the traffic to the VIP's on each network.
> These VIPS will be held by the Active server.
>
> Traffic will go to Server 1 on both Network1 and Network2.

When you say Network1 and Network2, does that mean two network
interfaces connected to two distinct subnets?

> If we lose either the interface to Network1 or the interface to Network2
> we need to fail over the VIP's to the other server.

That's what connectivity monitoring is for, and connectivity
monitoring is a cluster service: corosync doesn't concern itself with
it; Pacemaker manages it. The ocf:pacemaker:ping resource agent was
designed for exactly that purpose.

> We cannot keep the VIP on the active server if 1 of the networks is not
> working as an entire service will go down.
>
> Yes I would prefer a single ring with 2 interfaces... that fails over if
> either interface reports a problem.

No, you don't; you always want your cluster to communicate over as
many rings as possible. What you want is for your cluster resource
manager to fail over if there is a problem on the upstream network.

I hope this helps. Try to think of cluster communications and cluster
resource management as two distinct layers in the stack.

Cheers,
Florian

--
Need help with High Availability?
http://www.hastexo.com/now

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
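
To make the two-layers point concrete, here is a minimal sketch of what
each layer could look like in configuration. The subnets, gateway
addresses, and the VIP resource name "p_vip" are assumptions for
illustration, not taken from this thread; the corosync fragment uses
1.x-era rrp_mode syntax, and the Pacemaker part uses crm shell syntax.

```
# --- corosync.conf (cluster communications layer) ---
# Two redundant rings, one per physical network (assumed subnets).
totem {
    version: 2
    rrp_mode: passive              # redundant ring protocol
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # Network1 (assumed)
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 192.168.2.0   # Network2 (assumed)
        mcastaddr: 239.255.2.1
        mcastport: 5405
    }
}

# --- Pacemaker (cluster resource management layer) ---
# ocf:pacemaker:ping monitors upstream connectivity; the location
# constraint moves the hypothetical VIP resource "p_vip" away from a
# node that cannot reach both (assumed) gateways.
primitive p_ping ocf:pacemaker:ping \
    params host_list="192.168.1.1 192.168.2.1" multiplier="1000" \
    op monitor interval="15s"
clone cl_ping p_ping
location l_vip_needs_connectivity p_vip \
    rule -inf: not_defined pingd or pingd lt 2000
```

With two hosts in host_list and multiplier=1000, a node reaching both
gateways scores pingd=2000; losing either network drops it to 1000, so
the "pingd lt 2000" rule evicts the VIP when either upstream fails,
while corosync keeps talking over the surviving ring.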