Re: rhel6 node start causes power on of the other one


 



On Tue, 22 Mar 2011 11:02:09 -0500 Robert Hayden wrote:
> I believe you will want to investigate the "clean_start" property in the fence_daemon stanza (RHEL 5).
> Unsure if it is in RHEL6/Cluster3 code.  It is my understanding that the property can be used to
> bypass the timeout and remote fencing on initial startup.  This assumes you know that the remote
> node that is down was shut down cleanly and is not part of a cluster.

Aha... that was the parameter I was missing!
In the early stages of my work with the RH EL 5 cluster I began to use
something like
<fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="20"/>

and, since I always used it, I forgot that clean_start is not the default...
In my RH EL 6 cluster I hadn't put in that sort of line.
After adding it I get the behaviour I had in RH EL 5 again, without
the other node powering on... so it works in RH EL 6 as well.
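
For anyone hitting the same thing, here is roughly where that line sits
in a two-node cluster.conf (just a sketch: the node names, fence agent
and addresses below are placeholders, not my real config):

<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <!-- clean_start="1" tells fenced to skip startup fencing of nodes you
       know were shut down cleanly; post_join_delay is how long fenced
       waits for other nodes to join before fencing them -->
  <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="20"/>
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device name="ipmi_node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device name="ipmi_node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi_node1" ipaddr="10.0.0.1" login="admin" passwd="secret"/>
    <fencedevice agent="fence_ipmilan" name="ipmi_node2" ipaddr="10.0.0.2" login="admin" passwd="secret"/>
  </fencedevices>
</cluster>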

On Tue, Mar 22, 2011 at 4:23 PM, Digimer wrote:
[snip]
>
> To avoid this behaviour, change the fence action to 'poweroff'. Of
> course, this means that a failed node will never auto-recover.
>
> Also, the time it takes for the cluster to give up waiting for the other
> node defaults to 6 seconds. You can control this with
> <fence_daemon post_join_delay="x" />. I personally prefer setting this
> to 60 seconds, to give plenty of time to start both nodes. The value you
> choose should best suit your needs.
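
If I follow correctly, those two suggestions would translate to something
like this in cluster.conf (again only a sketch; whether the action value
is "off" or something else depends on the fence agent you use):

  <fence_daemon post_fail_delay="0" post_join_delay="60"/>
  ...
  <clusternode name="node1.example.com" nodeid="1">
    <fence>
      <method name="1">
        <!-- power the node off instead of rebooting it, so a failed
             node stays down until someone brings it back by hand -->
        <device name="ipmi_node1" action="off"/>
      </method>
    </fence>
  </clusternode>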

I remember other clustering software such as ServiceGuard: when a node
booted alone (in the sense that it didn't see any other node alive)
and a predefined timeout expired, it stopped at a console prompt asking
whether you were sure you wanted to proceed, so as not to potentially
corrupt data...
I think such a behaviour would be better than automatic fencing in the
default configuration.
Then again, this approach has its own drawbacks, such as needing direct
access to the server console, or ending up with a cluster where no node
is running at all until someone intervenes manually...


Thanks to all in the meantime, and to Jeremy for his help (I updated
the case too).
Gianluca

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

