Hello all,
I've been supporting a customer that runs Red Hat Cluster Suite v3.
On this occasion we have fully updated all packages with up2date, including
those from RHEL and RHCS, so we have the latest kernel with the latest
clumanager packages.
I've noticed that the "link monitoring" feature is perhaps not working
quite right. The servers have eth0 and eth1 bonded as bond0, and
heartbeat is carried over eth2.
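For completeness, the bonding is set up in the usual RHEL 3 way; roughly as
in the sketch below (the bond0 address is just a placeholder, and the
miimon/mode values are illustrative, not necessarily exactly what we run):

/etc/modules.conf:
    alias bond0 bonding
    options bond0 miimon=100 mode=1

/etc/sysconfig/network-scripts/ifcfg-bond0:
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.0.4.10       # placeholder address on the 10.0.4.0/22 corporate network
    NETMASK=255.255.252.0

/etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is the same apart from DEVICE):
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes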
We tried pulling both cables out of eth0 and eth1, rendering the machine
inaccessible from the corporate network but still reachable from the
other node via the crossover cable on eth2.
Since the "monitor link" check box is "on" for the virtual IP, I expected
clumanager to notice the loss of link and migrate the service to the
other node.
Unfortunately, the logs showed nothing, and the system kept running just
as it had before we unplugged the cables.
Is this behaviour correct? Shouldn't clumanager notice that the cables
went down?
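(For what it's worth, when we repeat the test I can also check what the
kernel itself reports for the slave links, e.g. with something like the
following, assuming standard RHEL 3 tools:

    mii-tool eth0 eth1                # link status as reported by the NICs
    cat /proc/net/bonding/bond0       # per-slave status, if the bonding driver exposes it

so we can confirm the links really are seen as down.)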
I appreciate any tips on this. BTW, the relevant part of cluster.xml is
here:
<service checkinterval="0" failoverdomain="srvkrm" id="0"
         maxfalsestarts="5" maxrestarts="10" name="srvkrm"
         userscript="/root/bin/Start/StartDBKRM">
    <service_ipaddresses>
        <service_ipaddress broadcast="10.0.7.255" id="0"
            ipaddress="10.0.4.184" monitor_link="1" netmask="255.255.252.0"/>
    </service_ipaddresses>
    <device id="0" name="/dev/emcpoweri1" sharename="">
        <mount forceunmount="yes" fstype="ext3" mountpoint="/ext_krm"
            options=""/>
    </device>
</service>
Thank you in advance for any help.
Regards,
Celso.
--
*Celso Kopp Webber*
celso@xxxxxxxxxxxxxxxx
*Webbertek - Opensource Knowledge*
(41) 8813-1919
(41) 3284-3035