4 node gfs cluster, quorum needs 3

Hello,

I currently have a 4-node GFS cluster using RLM.
Two nodes (node1, node2) have the GFS filesystem mounted; the other two (node3, node4) work as load balancers and as redundant lock servers (no GFS filesystem mounted on node3 or node4).
(I am using GFS-6.0.2.20-2, GFS-modules-smp-6.0.2.20-2, kernel-smp-2.4.21-32.0.1.EL.)

So when all nodes are up, the state is:

quorum_has = 4
quorum_needs = 3

I tried stopping lock_gulm on node3 and node4.
Although the cluster was then in the state

quorum_has = 2
quorum_needs = 3

the GFS filesystem on node1 and node2 still remained read/write accessible.
Is this behaviour correct?

----

nodes   quorum_needs   quorum_has   filesystem

3       >=2            2            r/w

4       >=3            2            r/w ?????

5       >=3            3            r/w
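(As I understand it, the quorum_needs column should follow a simple majority rule over the configured lock servers: more than half of them must be up. A quick sketch of that rule, just to show where my numbers come from; this is my own illustration, not the actual gulm source.)

```python
def quorum_needs(nodes: int) -> int:
    """Majority rule: a cluster of n lock servers needs
    floor(n/2) + 1 of them to be up to have quorum."""
    return nodes // 2 + 1

# Values behind the table above:
for n in (3, 4, 5):
    print(n, quorum_needs(n))  # prints: 3 2, 4 3, 5 3
```

By this rule, with 4 servers and only 2 up, quorum should be lost, which is why the read/write filesystem surprises me.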


Can anybody help me correct, or even extend, the table above?
Where is the truth? :) Or have I misunderstood something?

Thanks a lot for your answers.

--
Ján Kudják
UNIX/Linux Consultant

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
