RE: Clumanager and Chkconfig

It's basically a policy issue on your part. Some folks like to have
problem nodes boot up "dumb" to avoid the system taking a beating due
to a major problem. It's possible that the cluster would ride this
sort of thing out, but if a node goes down you'd be investigating
anyway, so booting "dumb" is not a bad idea.
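
If you do want that "dumb boot" behaviour, a rough sketch (untested
here, so treat it as an illustration rather than gospel) would be:

  # chkconfig clumanager off    <- node boots without rejoining the cluster
  ... investigate the failure ...
  # service clumanager start    <- rejoin the cluster by hand
  # clustat                     <- confirm the node shows up as a member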


Corey 

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Steve Nelson
Sent: Tuesday, April 18, 2006 9:11 AM
To: linux clustering
Subject:  Clumanager and Chkconfig

Hi All,

Should clumanager be set to start automatically on all nodes?  I have
a 2 node cluster (+ quorum) where, if I kill an interface, the cluster
fails over and the failed node reboots.  However, the node then
rejoins the cluster automatically - should this happen?

# chkconfig --list clumanager
clumanager      0:off   1:off   2:on    3:on    4:on    5:on    6:off

This is in chkconfig because I ran chkconfig --add clumanager.
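
For reference, chkconfig --add just takes its defaults from the init
script's "# chkconfig:" header, so presumably clumanager's header
lists runlevels 2345 - something like the below, though the 99/01
start/stop priorities are only illustrative:

  # grep '^# chkconfig' /etc/init.d/clumanager
  # chkconfig: 2345 99 01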

On another cluster I have not run this, but that cluster is currently
in production so I can't test failover.

My feeling was that Oracle should fail over to the other node, clustat
should show one node as inactive, and that node should then be started
manually.

Does this seem right?

S.

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
