I'm sorry, that was my mistake; the machine does in fact power back up.
Adam Manthei wrote:
On Thu, Aug 04, 2005 at 11:00:47AM +0000, "Sævaldur Arnar Gunnarsson [Hugsmiðjan]" wrote:
Well .. when I manually run the fence_drac.pl Perl script and supply it
with the IP of the DRAC (-a 192.168.100.173), the login name (-l root),
the DRAC/MC module name (-m Server-1) and the password (-p dummypassword),
the machine in question (Server-1) powers down and doesn't power back on.
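In other words, something like:

    fence_drac.pl -a 192.168.100.173 -l root -p dummypassword -m Server-1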
Interesting... :(
How do I implement this in cluster.xml (specify the IP/login/password/module
name)?
Typically, the parameters are supposed to be in the man page for the agent.
If they are not, then it should be considered a bug. I think the following will
work for you, but I've not tested the config below, so it might not be error free :)
<fencedevices>
        <fencedevice name="dracula"
                     agent="fence_drac"
                     login="root"
                     passwd="dummypassword"
                     ipaddr="192.168.100.173"
                     action="reboot"/>
</fencedevices>

<clusternodes>
        <clusternode name="servername">
                <fence>
                        <method name="1">
                                <device name="dracula" module="Server-1"/>
                        </method>
                </fence>
        </clusternode>
</clusternodes>
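Once that config is in place and the cluster software is running, one way to
sanity-check the device without waiting for a real failure is to fence the
node by hand (the node name below is the placeholder from the config above):

    fence_node servername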
And shouldn't it power back up afterwards?
The default action is supposed to be "reboot", as in the machine should come
back online. I don't know why it isn't. If you continue to have problems,
try enabling the debugging output from the command line:
fence_drac -a 192.168.100.173 -l root -p dummypassword -m Server-1 \
-D /tmp/drac.log -v
Keep us posted.
-Adam
JACOB_LIBERMAN@xxxxxxxx wrote:
The 1855 has a built-in ERA controller. You can modify the fencing agents
to either send "racadm serveraction powercycle" or install the Perl Telnet
module and create your own fencing script. The former option requires that
the RAC management software be installed on the host. I haven't tested this
with the 1855, btw.
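A bare-bones sketch of the racadm approach might look like the script below.
This is only an illustration: it assumes racadm is installed locally, uses the
"serveraction powercycle" subcommand mentioned above, and leaves out both the
selection of the target blade/module (which depends on the racadm version) and
the key=value options a real agent has to read on stdin when fenced invokes it.

    #!/usr/bin/perl
    # Illustrative sketch only: power-cycle a blade via the Dell RAC tools.
    # Assumes the RAC management software (racadm) is installed on the host.
    use strict;
    use warnings;

    # "serveraction powercycle" is the racadm subcommand suggested above.
    # Picking the target module/blade is omitted here; the syntax for that
    # depends on the RAC firmware and tools in use.
    my $rc = system("racadm", "serveraction", "powercycle");

    # Fence agents must exit 0 on success and non-zero on failure.
    exit($rc == 0 ? 0 : 1);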
http://sources.redhat.com/cgi-bin/cvsweb.cgi/cluster/fence/agents/drac/?cvsroot=cluster
The fence_drac agent out on the CVS should work for you. If you can't get
it working, let me know, and I'll see if I can dig up an 1855 in the lab.
Thanks, jacob
-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of
"Sævaldur Arnar Gunnarsson [Hugsmiðjan]"
Sent: Wednesday, August 03, 2005 6:59 AM
To: linux-cluster@xxxxxxxxxx
Subject: Fencing agents
I'm implementing shared storage between multiple (2 at the
moment) blade machines (Dell PowerEdge 1855) running RHEL4 ES,
connected to an EMC AX100 through FC.
The SAN has two FC ports, so the need for an FC switch has not
yet come up; however, we will add other blades in the coming months.
The one thing I haven't figured out with GFS and the
Cluster Suite is the whole idea of fencing.
We have a working setup using CentOS rebuilds of the
Cluster Suite and GFS (http://rpm.karan.org/el4/csgfs/), which
we are not planning to use in the final implementation; there
we plan to use the official GFS packages from Red Hat.
The fencing agent in that setup is manual fencing.
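As I understand it, a manual-fencing setup like ours is declared with the
fence_manual agent, roughly like this (the names are placeholders, not our
actual config):

    <fencedevices>
            <fencedevice name="human" agent="fence_manual"/>
    </fencedevices>

    <clusternodes>
            <clusternode name="blade1">
                    <fence>
                            <method name="1">
                                    <device name="human" nodename="blade1"/>
                            </method>
                    </fence>
            </clusternode>
    </clusternodes>

With this, the cluster waits for a human to confirm (via fence_ack_manual)
that the failed node is really down before recovery proceeds.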
Both machines have the file system mounted and there appear
to be no problems.
What does "automatic" fencing have to offer that manual
fencing lacks?
If we decide to buy the FC switch right away, is it recommended
that we buy one of those that have a fencing agent available
for the Cluster Suite?
If we can't get our hands on a supported FC switch, can we do
fencing in another manner than through an FC switch?
--
Sævaldur Gunnarsson :: Hugsmiðjan
--
Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster