Dear all,
We have set up a 3+1 cluster: three active nodes, one standby node, and a quorum disk.
clustat
Member Status: Quorate
 Member Name                        ID   Status
 ------ ----                        ---- ------
 servera                               1 Online, rgmanager
 serverb                               2 Online, rgmanager
 serverc                               3 Online, rgmanager
 standby                               4 Online, Local, rgmanager
 /dev/emcpowers                        0 Online, Quorum Disk

 Service Name             Owner (Last)             State
 ------- ----             ----- ------             -----
 service:servicea         servera                  started
 service:serviceb         serverb                  started
 service:servicec         serverc                  started
When any active server fails, its service relocates to the standby server, and in general the cluster functions properly.
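For illustration, the relocation that happens on a failure is equivalent to moving a service by hand with clusvcadm (names taken from the clustat output above):

    # Relocate servicea to the standby member...
    clusvcadm -r servicea -m standby
    # ...and back to its usual node
    clusvcadm -r servicea -m servera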
However, when I run clusvcadm -Z servicea, it successfully freezes the service. When I then run clusvcadm -U servicea to unfreeze it, rgmanager checks the status of the application it monitors, and for a reason I don't understand the check returns a failed status even though the application is running properly. rgmanager then tries to stop the application, reports that it failed to unmount the partition, and servera reboots. While servera is rebooting, servicea cannot fail over to the standby node and the service state shows "recoverable". After servera has rebooted successfully, servicea runs on servera again, but then serverb and serverc reboot together.
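For reference, the exact sequence (using servicea as the example) was roughly:

    # Freeze: rgmanager stops monitoring and managing servicea
    clusvcadm -Z servicea
    # Unfreeze: rgmanager resumes status checks on the service's resources
    clusvcadm -U servicea
    # The status check then fails, servera reboots, and clustat shows
    # servicea stuck in "recoverable" instead of failing over to standby
    clustat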
Do you have any idea?