We went with two APC remote power switches. Outlets on them can be grouped
together so a single fence action turns off multiple outlets at once, which
keeps redundant power supplies from holding the machine up. Just remember to
configure both cluster nodes to reach the APC device at the same IP address,
so the two nodes can't fence each other simultaneously: the APC software only
allows one session at a time, so the second node is blocked from connecting
while the first holds the session.
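For reference, a minimal cluster.conf sketch of that arrangement might look
like the following. The cluster name, hostnames, ports, IP address, and
credentials are made up for illustration, and it assumes the APC outlet-group
feature is configured so that fencing one grouped outlet also drops its
partner outlet (i.e. both of a node's power supplies). Both nodes reference
the same fencedevice entry, so they connect to the same IP:

  <?xml version="1.0"?>
  <cluster name="mycluster" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="node1.example.com" nodeid="1">
        <fence>
          <method name="power">
            <!-- port 1 is an APC outlet group covering both of
                 node1's power supplies (hypothetical grouping) -->
            <device name="apc" port="1"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="node2.example.com" nodeid="2">
        <fence>
          <method name="power">
            <device name="apc" port="2"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- one shared entry, one IP: the APC's single-session limit
           then serializes fence attempts from the two nodes -->
      <fencedevice name="apc" agent="fence_apc" ipaddr="192.168.1.50"
                   login="apc" passwd="apc"/>
    </fencedevices>
  </cluster>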
Hope that helps.
Chris
Josh Gray wrote:
Sorry, I'm quite the question asker on the list this week; I'm trying to
digest the cluster docs pretty quickly!
One more Q - reading the FAQ section on power fencing: what is the best
design for a two (or three) node cluster with dual power supplies? I would
assume the best design with regard to redundancy would be two power switches,
one for each power supply. But in the case of fencing shoot-outs, should they
all be on the same single switch? Am I just overthinking this?
" same network path as the path used by CMAN for cluster communication"
Does that mean literally the same ethernet switch or logical vlan, or what?
# What is the best two-node network & fencing configuration?
In a two-node cluster (where you are using two_node="1" in the cluster
configuration, and without QDisk), there are several considerations you need
to be aware of:
* If you are using per-node power management of any sort where the
device is not shared between cluster nodes, you MUST have all fence devices
on the same network path as the path used by CMAN for cluster communication.
Failure to do so can result in both nodes simultaneously fencing each other,
leaving the entire cluster dead, or in a fence loop. Typically, this
includes all integrated power management solutions (iLO, IPMI, RSA, ERA, IBM
BladeCenter, Egenera BladeFrame, Dell DRAC, etc.), but it also includes
remote power switches (APC, WTI) if the devices are not shared between the
two nodes.
* It is best to use power-type fencing. SAN or SCSI-reservation fencing
might work, as long as it meets the above requirements. If it does not, you
should consider using a quorum disk or partition.
If you cannot meet the above requirements, you can use a quorum disk or
partition.
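A rough sketch of the quorum-disk variant, for a two-node cluster: the
relevant cluster.conf pieces might look like the following. The label, ping
target, votes, and timings are illustrative only; the label must match the
one written to the shared device with mkqdisk -l:

  <!-- two nodes (1 vote each) + qdisk (1 vote) = 3 expected votes;
       two_node special-case mode is turned off -->
  <cman two_node="0" expected_votes="3"/>
  <quorumd interval="1" tko="10" votes="1" label="myqdisk">
    <!-- heuristic: a node must be able to ping the gateway
         (hypothetical address) to keep its qdisk vote -->
    <heuristic program="ping -c1 -w1 192.168.1.1" score="1"
               interval="2" tko="3"/>
  </quorumd>

The heuristic is what breaks fencing ties: the node that loses the network
path loses the qdisk vote, goes inquorate, and gets fenced by the survivor.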