Re: 4 node cluster even split

I don't know if this will help you, but let me share our configuration as an example.  I have overridden the default votes on the 2 nodes that are vital to our application in an 11-node cluster.  If I cold start the cluster, those 2 nodes running alone are enough to quorate, because they carry 5 votes each for a total of 10 votes.  The remaining 9 nodes carry the default 1 vote each, so the cluster's expected votes are 19.

10 = (19 / 2) + 1   (integer division)

If I lose 1 of those 2 network director nodes, I lose 5 votes but remain quorate, unless I lose 5 more regular nodes along with it.  If I lose BOTH network director nodes (10 votes), I don't care about quorum, because my application is dead anyway (no network directors managing client connections!).  But we have a contingency plan to "promote" one of the failover nodes to a network director by running the correct services and adjusting its vote count to 5 for extra redundancy.
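For reference, the relevant part of our cluster.conf looks roughly like the fragment below.  The node names are made up and I've trimmed the node list to three entries, so treat it as a sketch of the idea rather than our actual file:

    <cluster name="example" config_version="1">
      <!-- 2 director nodes x 5 votes + 9 regular nodes x 1 vote = 19 expected votes -->
      <cman expected_votes="19"/>
      <clusternodes>
        <!-- the two nodes vital to the application carry 5 votes each -->
        <clusternode name="director1" nodeid="1" votes="5"/>
        <clusternode name="director2" nodeid="2" votes="5"/>
        <!-- the remaining 9 nodes keep the default single vote -->
        <clusternode name="node03" nodeid="3" votes="1"/>
        <!-- ... node04 through node11 look the same ... -->
      </clusternodes>
    </cluster>

The contingency plan mentioned above amounts to bumping votes="1" to votes="5" on whichever failover node gets promoted to network director.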

It would be nice to see other implementations that vary from the typical 1-vote-per-node cluster.


On Tue, 2007-04-10 at 12:43 +0100, Patrick Caulfield wrote:
Janne Peltonen wrote:
> Hi!
> 
> I've been wondering...
> 
> If I were to build a 4 node cluster, what would happen (with default
> quorum settings) if I lost two nodes at once (say, by losing one of my
> two blade racks)?  Would the remaining two nodes be able to continue, or
> would they consider quorum dissolved and shut down the remaining
> services? I only have three nodes for testing purposes, so I haven't
> been able to look at this yet.

Under normal circumstances you need (n/2)+1 nodes to keep quorum. So if you lose
two nodes out of four then the services will stop. To prevent this you can use
the qdisk program to keep the cluster quorate in such circumstances.
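For the 4-node case, a quorum disk along the lines of the fragment below would cover the even split.  The label and device path are placeholders, and the interval/tko values would need tuning for a real deployment:

    <!-- quorum = (expected_votes / 2) + 1 = (5 / 2) + 1 = 3,
         so 2 surviving nodes + the qdisk vote (2 + 1 = 3) stay quorate -->
    <cman expected_votes="5"/>
    <quorumd interval="1" tko="10" votes="1" label="example_qdisk" device="/dev/mapper/quorum"/>

If you also want a single surviving node to be able to hold the cluster up, I believe the usual approach is to give the quorum disk more votes (node count minus 1) and raise expected_votes to match.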

Robert Hurst, Sr. Caché Administrator
Beth Israel Deaconess Medical Center
1135 Tremont Street, REN-7
Boston, Massachusetts   02120-2140
617-754-8754 ∙ Fax: 617-754-8730 ∙ Cell: 401-787-3154
Any technology distinguishable from magic is insufficiently advanced.


