Hi,

Was busy on some other stuff.

On Fri, Jan 16, 2009 at 5:16 AM, Rajagopal Swaminathan <raju.rajsand@xxxxxxxxx> wrote:
> Greetings,
>
> On Thu, Jan 15, 2009 at 1:18 AM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>>> On Fri, Jan 9, 2009 at 12:09 AM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>>>>
>>>> In an attempt to solve my fencing issue in my 2-node cluster, I tried to
>>>> run fence_ipmilan to check whether fencing is working or not. I need to
>>>> know what my problem is.
>>>>
>>>> [root@ha1lx ~]# fence_ipmilan -a 10.42.21.28 -o off -l admin -p admin
>>
>> Yes, as you said, I am able to power down node4 using node3, so it
>> seems IPMI is working fine. But I don't know what is going on with my
>> two-node cluster. Can a Red Hat cluster operate fine in two-node mode?
>
> Yes. I have configured a few clusters on RHEL 4 and 5. They do work.
>
>> Do I need qdisk, or is it optional? Which area do I need to focus on to
>> run my 2-node Red Hat cluster using IPMI as the fencing device?
>
> But I have done it on HP, Sun and IBM servers. All of them have their
> own technology, like HP iLO, Sun ALOM, etc.
>
> I never had a chance to work with IPMI.
>
> BTW, this is a wild guess; I am just curious:
>
>> <clusternode name="10.42.21.27" nodeid="2" votes="1">
>
> Why is nodeid 2 here
>
>> <method name="1">
>> <device name="fence1"/>
>> </method>
>>
>> <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.28" login="admin"
>> name="fence1" passwd="admin"/>
>
> and
>
>> <clusternode name="10.42.21.29" nodeid="1" votes="1">
>
> here it is 1?

Changing the node ids did not solve my problem.

>> <method name="1">
>> <device name="fence2"/>
>> </method>
>
>> <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.30" login="admin" name="fence2" passwd="admin"/>
>
> <All the disclaimers ever invented apply>
> Have you tried exchanging the numbers? Say, the one with IP .27 to 1 and .29 to 2.
> </All the disclaimers ever invented apply>
>
> No warranties offered. Just a friendly suggestion...
> Never try it on a production cluster.
>
> Also, we will all get a clearer picture if you use separate switches
> for the heartbeat and data networks.
>
> HTH
>
> With warm regards
>
> Rajagopal

I will try qdisk in my 2-node cluster and post here how it goes.

Thanks
Paras.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
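
For reference, below is a minimal two-node cluster.conf sketch along the lines discussed in the thread. The node names, IPMI addresses and credentials are the ones quoted above; the cluster name, qdisk label, heuristic command and gateway IP are placeholders, not taken from the original configuration, so treat this only as an illustration of the two-node / qdisk trade-off, not as the poster's actual file.

  <?xml version="1.0"?>
  <cluster name="mycluster" config_version="1">
    <!-- Two-node mode without qdisk: two_node="1" and expected_votes="1"
         let a single surviving node keep quorum after the other is fenced.
         If qdisk is used instead, set two_node="0", expected_votes="3"
         (2 node votes + 1 qdisk vote) and enable the quorumd stanza below. -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="10.42.21.27" nodeid="1" votes="1">
        <fence>
          <method name="1">
            <device name="fence1"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="10.42.21.29" nodeid="2" votes="1">
        <fence>
          <method name="1">
            <device name="fence2"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- Each fence device points at the IPMI/BMC address of the node it powers off. -->
      <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.28" login="admin" name="fence1" passwd="admin"/>
      <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.30" login="admin" name="fence2" passwd="admin"/>
    </fencedevices>
    <!-- Optional qdisk tie-breaker; label and heuristic gateway are placeholders.
    <quorumd interval="1" tko="10" votes="1" label="myqdisk">
      <heuristic program="ping -c1 -w1 10.42.21.1" score="1" interval="2" tko="3"/>
    </quorumd>
    -->
  </cluster>

The design point the sketch tries to show: with only two votes in the cluster, either two_node="1" or a qdisk vote is needed as the tie-breaker, otherwise the surviving node loses quorum the moment its peer goes down and cannot complete fencing or take over services.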