John R Pierce wrote:
I've set up the CentOS version of the Red Hat Cluster Suite on a pair of
CentOS 4.4 i386 test boxes, wanting to do high-availability stuff.
It's working: I've added a virtual IP and a Fibre Channel hosted ext3
file system that either node can mount. If I manually crash or reboot one,
the IP and FS mount on the other. Awesome.
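For anyone following along, the failover service described above roughly corresponds to a cluster.conf fragment like this (the IP, device path, mount point, and resource/service names are placeholders, not my actual config):

```xml
<rm>
  <resources>
    <!-- placeholder address, device, and mount point -->
    <ip address="192.168.1.100" monitor_link="1"/>
    <fs name="shared_fs" device="/dev/sdb1" mountpoint="/mnt/shared"
        fstype="ext3" force_unmount="1"/>
  </resources>
  <service autostart="1" name="ha_service">
    <ip ref="192.168.1.100"/>
    <fs ref="shared_fs"/>
  </service>
</rm>
```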
The servers are connected to the storage via a QLogic SANbox 5600
fibre switch, and I've added a SANbox2 'fence device' to the cluster,
with its IP and login/password... but I don't understand how to
configure the SANbox or the fencing agent so that it knows how to map
these two servers to the zones or zonesets on the SANbox.
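In case it helps, here's roughly what the fence device I added looks like in /etc/cluster/cluster.conf (the device name is a placeholder I made up, and the credentials are masked):

```xml
<fencedevices>
  <!-- hostname/credentials are placeholders -->
  <fencedevice agent="fence_sanbox2" name="sanbox"
      ipaddr="svfis-sanbox" login="XXXXX" passwd="XXXXX"/>
</fencedevices>
```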
Right now everything on the SANbox is in the same zone (these two plus
a few other servers, plus the shared storage controller, which also has
some LUNs that are only accessible by a couple of Sun servers running
Solaris).
What am I missing? I did see the man page for fence_sanbox2, but I
don't see how that applies via system-config-cluster...
OK, a clean start this morning, and I found the fence configuration
under each cluster node. For the QLogic SANbox2 fencing driver, I'm
not sure if I'm supposed to put in the integer port number ('13'), the
port name ('port13'), or the port address ('010d00'), or what? Tried
the first and last of those, and it still doesn't seem to be fencing :-/
OK, fence_sanbox2 -a svfis-sanbox -l XXXXX -p XXXXX -n 14 -o disable
seems to work, so I guess it's the integer port number, like 13 or 14.
Maybe I just need to reboot everything to get the fencing settings
working, hmmm.
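For the archives, here's roughly what the per-node fence sections end up looking like once the integer port numbers are in (node names and the "sanbox" device name are placeholders; the device name must match the fencedevice entry):

```xml
<clusternodes>
  <clusternode name="node1" votes="1">
    <fence>
      <method name="1">
        <!-- port is the integer switch port, e.g. 13 -->
        <device name="sanbox" port="13"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="node2" votes="1">
    <fence>
      <method name="1">
        <device name="sanbox" port="14"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
```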
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos