Strange 'lock out all'

Hi,

A strange situation occurred in our five-node GFS cluster. Two of the 
nodes (lxserv1-gfs and web1-gfs) were fenced off due to an administrator 
error. We should still have been able to run smoothly, but two nodes were 
marked disallowed(?!), leaving only one vote. From the last node standing:

root@saturn:~# cman_tool status
Version: 6.0.1
Config Version: 6
Cluster Name: kfki
Cluster Id: 1583
Cluster Member: Yes
Cluster Generation: 2332
Membership state: Cluster-Member
Nodes: 4
Expected votes: 5
Total votes: 1
Quorum: 3  
Active subsystems: 7
Flags: DisallowedNodes 
Ports Bound: 0 11  
Node name: saturn-gfs
Node ID: 5
Multicast addresses: 224.0.0.3 
Node addresses: 192.168.192.18 
Disallowed nodes: lxserv0-gfs web0-gfs 
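
For context on the numbers above: the Quorum: 3 figure is consistent with the usual majority rule cman applies (assuming the default of one vote per node and no quorum disk), i.e. more than half of the expected votes. A minimal sketch of that arithmetic:

```python
# Majority quorum as typically computed by cman (assumptions: one vote
# per node, no qdisk): strictly more than half of the expected votes.
def quorum(expected_votes: int) -> int:
    return expected_votes // 2 + 1

print(quorum(5))  # Expected votes: 5 -> Quorum: 3, as in the output above
```

So with only 1 of 5 expected votes present, the remaining node is below quorum.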

Why and how does a node get disallowed? How could we prevent it from 
happening in the future?

Best regards,
Jozsef
--
E-mail : kadlec@xxxxxxxxxxxx, kadlec@xxxxxxxxxxxxxxxxx
PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
Address: KFKI Research Institute for Particle and Nuclear Physics
         H-1525 Budapest 114, POB. 49, Hungary

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
