Re: rhel6 node start causes power on of the other one

Hi,

I have the same setup (two_node=1, RHEL 5.5, no quorum disk), and it works fine for me. With both nodes down, starting one node always successfully fences the other, and that is the expected behaviour, as Fabio said.

In my scenario the fenced node stays down even after being successfully fenced by the remaining node, because I had previously halted it with shutdown -h. The fencing (APC power switch) runs fine and flips the fenced node's outlets off and then back on, but the server does not boot until an operator presses its power button.

How a server reacts to a power fencing action often depends on its BIOS configuration, for example the Automatic Server Restart options.
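
For reference, a power-switch setup like the one above is typically wired
into cluster.conf along these lines. This is only a rough sketch: the
device name, address, credentials and outlet number below are made-up
placeholders, so check fence_apc(8) and your own hardware before reusing
any of it.

  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="apc">
        <!-- "port" is the PDU outlet feeding this node (placeholder) -->
        <device name="apc-pdu" port="1"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <!-- every value here is an illustrative placeholder -->
    <fencedevice agent="fence_apc" name="apc-pdu"
                 ipaddr="192.168.1.100" login="apc" passwd="apc"/>
  </fencedevices>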

alvaro

On 3/22/2011 11:12 AM, Gianluca Cecchi wrote:
> Hello,
> I'm using the latest updates on a two-node RHEL 6 based cluster.
> At the moment no quorum disk is defined, so cluster.conf contains
> this line:
> <cman expected_votes="1" two_node="1"/>
> 
> # rpm -q cman rgmanager fence-agents ricci corosync
> cman-3.0.12-23.el6_0.6.x86_64
> rgmanager-3.0.12-10.el6.x86_64
> fence-agents-3.0.12-8.el6_0.3.x86_64
> ricci-0.16.2-13.el6.x86_64
> corosync-1.2.3-21.el6_0.1.x86_64
> 
> # uname -r
> 2.6.32-71.18.2.el6.x86_64

For RHEL-related questions you should always file a ticket with GSS.

> 
> If the initial situation is both nodes down and I start one of them, it
> powers on the other, which is not what I intended...
> Is this the expected default behaviour in RHEL 6 with two nodes and
> no quorum disk? Or does it happen in general, whether or not a quorum
> disk is defined?
> If so, how can I change it, if that is possible?

This is expected behavior.

The node that is booting/powering on will gain quorum by itself, and
since it does not detect the other node for N seconds, it will
perform a fencing action to make sure the missing node is not accessing
any shared resources.
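
The N-second window mentioned above is fenced's post_join_delay. As a
minimal sketch (the value 30 below is purely illustrative; see fenced(8)
for the exact semantics and the default on your release), it can be tuned
in cluster.conf:

  <!-- seconds fenced waits for missing nodes to join after startup
       before fencing them; 30 is an illustrative value, not advice -->
  <fence_daemon post_join_delay="30"/>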

I am not sure why you want a one-node cluster, but one easy workaround
is to start both nodes at the same time and then shut one of them down.
At that point they have both seen each other, and the one going down
will tell the other "I am going offline, no worries, it's all good".
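
On RHEL 6 that start-together-then-leave-cleanly sequence would look
roughly like this (a sketch assuming the stock cman/rgmanager init
scripts):

  # on both nodes, at roughly the same time:
  service cman start
  service rgmanager start

  # later, on the node you want offline -- a clean leave, so the
  # survivor knows it left on purpose and will not fence it:
  service rgmanager stop
  service cman stop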

Fabio

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster



