Re: rhel6 node start causes power on of the other one

Hi,

On 3/22/2011 11:12 AM, Gianluca Cecchi wrote:
> Hello,
> I'm running the latest updates on a two-node RHEL 6 based cluster.
> At the moment no quorum disk defined, so this line inside cluster.conf
> <cman expected_votes="1" two_node="1"/>
> 
> # rpm -q cman rgmanager fence-agents ricci corosync
> cman-3.0.12-23.el6_0.6.x86_64
> rgmanager-3.0.12-10.el6.x86_64
> fence-agents-3.0.12-8.el6_0.3.x86_64
> ricci-0.16.2-13.el6.x86_64
> corosync-1.2.3-21.el6_0.1.x86_64
> 
> # uname -r
> 2.6.32-71.18.2.el6.x86_64

For RHEL related questions you should always file a ticket with GSS.

> 
> If the initial situation is both nodes down and I start one of them,
> it powers on the other one, which is not what I intend...
> Is this the expected default behaviour in RHEL 6 with two nodes and
> no quorum disk? Or in general, regardless of whether a quorum disk
> is defined?
> If so, how can I change it, if that is possible?

This is expected behavior.

The node that is booting will gain quorum by itself, and since it does
not detect the other node within a set number of seconds, it performs a
fencing action to make sure the missing node is not accessing any
shared resource.
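As a sketch only: the delay before that startup fencing action is the
fence daemon's post_join_delay (documented in fenced(8)); the value
below is illustrative, not a recommendation, and raising it only delays
fencing rather than disabling it. In cluster.conf it would look like:

```xml
<!-- Sketch: post_join_delay is the number of seconds fenced waits
     after joining the fence domain for other members to appear
     before fencing them. 60 here is an illustrative value. -->
<fence_daemon post_join_delay="60"/>
```

There is also a clean_start option on the same element that skips
startup fencing entirely, but it is risky: if the other node is
actually alive and hung, it may still be touching shared storage.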

I am not sure why you want a one-node cluster, but one easy workaround
is to start both nodes at the same time and then shut one of them down.
At that point they have both seen each other, and the one going down
will tell the other "I am going offline, no worries, it's all good".

Fabio

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


