Thank you all.

The problem I have is that I don't seem to be able to get out of the cluster gracefully, even if I stop the services manually in the right order. For example, I joined the cluster manually by starting cman, clvmd and gfs2 in that order, and everything was working just fine. Then I wanted to reboot. This time I wanted to do it manually, so I went to stop the services in order:

[root@test2 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/vrstorm):                     [  OK  ]
[root@test2 ~]# service clvmd stop
Signaling clvmd to exit                                    [  OK  ]
Waiting for clvmd to exit:                                 [FAILED]
clvmd failed to exit                                       [FAILED]

Somehow clvmd cannot be stopped. I still have the process running:

root      2646  0.0  0.5 194476 45016 ?        SLsl 02:18   0:00 clvmd -T30

How do I stop clvmd gracefully? I am running RHEL 6:

[root@test2 ~]# uname -a
Linux test2 2.6.32-71.18.2.el6.x86_64 #1 SMP Wed Mar 2 14:17:40 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@test2 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.0 (Santiago)
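For reference, the full teardown I would expect a graceful leave to follow is roughly the sequence below. The explicit vgchange step and the volume group name are my own guesses, not something I have confirmed:

service gfs2 stop       # unmount every GFS2 filesystem first
vgchange -an vg_storm   # guess: deactivate the clustered LVs so clvmd has nothing open (vg_storm is a placeholder name)
service clvmd stop      # clvmd should then be able to exit
service cman stop       # finally leave the cluster

I am not sure whether the explicit vgchange is actually needed, or whether the clvmd init script is supposed to do that itself.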
Thank you very much.
Shi

On Thu, Mar 10, 2011 at 1:41 PM, Alvaro Jose Fernandez <alvaro.fernandez@xxxxxxxxx> wrote:

Hi,

Given fencing is properly configured, I think the default boot/shutdown RHCS scripts should work. I too use two_node (but no clvmd) on RHEL 5.5 with the latest updates to cman and rgmanager, and a shutdown -r works well (as does a shutdown -h). The cluster daemon on the other node should log this as a node shutdown in /var/log/messages, adjust quorum, and not trigger a fencing action against the node that is shutting down.
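By "default scripts" I simply mean letting the stock init scripts handle the start/stop ordering, something like the following (rgmanager only applies if you actually run managed services):

chkconfig cman on        # join the cluster early in the boot sequence, leave it last on shutdown
chkconfig clvmd on       # starts after cman, stops before it
chkconfig gfs2 on        # mounts GFS2 filesystems after clvmd, unmounts them first on the way down
chkconfig rgmanager on   # only if managed services are used

The chkconfig priorities in those init scripts already encode that ordering, so a plain shutdown -r should walk through it in reverse.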
If one halts and powers off one of the two nodes via shutdown -h, and then reboots (via shutdown -r) the surviving node, the surviving node will fence the other. We have power-switch fencing, and it should simply succeed (powering the other node's outlets off and then back on). Once that fencing succeeds, the boot sequence continues and the node assumes quorum.
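The node name below is just an example; the point is that the surviving node drives fencing through whatever is configured in cluster.conf, and you can exercise the same path by hand with something like:

fence_tool ls      # show fence domain membership and state (cluster 3 / RHEL 6)
fence_node node2   # manually fence "node2" using the agent configured in cluster.conf

With our power switch that ends up as the same off/on cycle of the outlets that happens automatically during the boot-time fence described above.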
If the other node is later powered on, it should join the cluster without problems.
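Once it is back up, something like this on either node should confirm both members have rejoined (clustat is only present with rgmanager):

cman_tool nodes    # both nodes should show status "M" (member)
clustat            # service/member overview, if rgmanager is installed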
alvaro,
Hi there,

I've set up a two-node cluster with cman, clvmd and gfs2. I don't use qdisk, but have

<cman expected_votes="1" two_node="1"/>

I would like to know the proper procedure for rebooting a node in a two-node cluster (maybe this applies to clusters of any size?) when both nodes are functioning fine but I just want to reboot one for some reason (for example, a kernel upgrade). Is there a preferred/better way to reboot the machine than just running the "reboot" command as root? I have been running "reboot" so far and it sometimes creates problems for us, including making the other node fail.

Thank you very much.
Shi
--
Shi Jin, Ph.D.
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster