Tom,
We fence with a custom fencing system that integrates with our virtualization infrastructure, so it probably doesn't apply to your situation. It involves daemons, multicast, and SSL-encrypted RPC calls.
As for qdisk, I'm probably not the best person to answer this because we decided we don't need it in our production setup. It adds votes to the cluster for nodes that still have access to the shared storage, which mainly helps small clusters keep quorum through a failure or network partition. Reasons for using it vary, so perhaps a bit of Googling is in order...
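If you do try it, qdiskd is configured with a <quorumd> block in cluster.conf. A minimal sketch, assuming a two-node cluster; the label, tunings, and heuristic below are made up for illustration, so check the qdisk(5) man page before copying any of it:

    <cman expected_votes="3" two_node="0"/>
    <quorumd interval="1" tko="10" votes="1" label="myqdisk">
      <heuristic program="ping -c1 -w1 10.0.0.1" score="1" interval="2"/>
    </quorumd>

With the qdisk contributing a vote, the node that can still see the shared disk (and pass the heuristic) keeps quorum instead of the two nodes deadlocking at one vote each.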
I'm afraid I don't know much about the GUI, because we rolled our own system on Gentoo.

Hi Jayson,
I just plugged the hosts directly into the Coraid for troubleshooting purposes. This is how Coraid set up the machines in their published benchmarks, so I figured it would be safe. I actually have two Asante IC36524 switches dedicated to Coraid storage, with the intention of having redundant paths. My dual-port PCIe Ethernet cards didn't arrive until yesterday, so I only had a single port on each host to connect to each switch. That left one of the hosts connected through the bad port on the SR1520. I should have found this sooner.
On a separate note, I have modified the fence_vixel script to perform fencing on the Asante switches by shutting down the appropriate switch ports. These Asante switches use what appears to be a clone of the Cisco IOS interface, so the script should work with any Ethernet switch that has an IOS-style telnet interface, or will at least get you close. It works from the command line, but I haven't actually tested it in the cluster through a real fence operation. I'd be happy to share it if it would be helpful.
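The core of the approach is just scripted telnet against the IOS-style CLI. A rough sketch of the idea in Python (not the actual script; the prompts, addresses, and interface names below are placeholders and untested, so adjust them for your switch):

    import telnetlib

    def fence_port(host, password, enable_pw, interface):
        # Log in, enter enable mode, and shut down the given port
        # over an IOS-style telnet CLI.
        tn = telnetlib.Telnet(host, 23, timeout=10)
        tn.read_until(b"Password: ")
        tn.write(password.encode() + b"\n")
        tn.read_until(b">")
        tn.write(b"enable\n")
        tn.read_until(b"Password: ")
        tn.write(enable_pw.encode() + b"\n")
        tn.read_until(b"#")
        tn.write(b"configure terminal\n")
        tn.read_until(b"(config)#")
        tn.write(("interface %s\n" % interface).encode())
        tn.read_until(b"(config-if)#")
        tn.write(b"shutdown\n")  # cut the fenced node off from storage
        tn.read_until(b"(config-if)#")
        tn.write(b"end\n")
        tn.read_until(b"#")
        tn.write(b"exit\n")
        tn.close()

    if __name__ == "__main__":
        # Placeholder values; a real fence agent gets its options
        # as key=value pairs on stdin from fenced.
        fence_port("192.168.1.10", "loginpw", "enablepw", "FastEthernet0/5")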
How are you fencing your cluster nodes? I specify the Vixel fence in the configuration GUI since I can't find a way to easily add a custom fence agent.
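From what I've read, it should at least be possible to wire a custom agent in by hand-editing cluster.conf. Something along these lines, where fence_asante and all the names and addresses are hypothetical: a <fencedevice> entry naming the agent,

    <fencedevices>
      <fencedevice agent="fence_asante" name="asante1"
                   ipaddr="192.168.1.10" login="admin" passwd="secret"/>
    </fencedevices>

and a matching <fence> block in each <clusternode>:

    <clusternode name="node1" votes="1">
      <fence>
        <method name="1">
          <device name="asante1" port="FastEthernet0/5"/>
        </method>
      </fence>
    </clusternode>

I believe the agent executable then has to sit in /sbin alongside the stock fence_* agents, though the GUI may not preserve hand edits.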
What is the benefit of using qdisk?
Thanks,
Tom
--
Jayson Vantuyl
Systems Architect
Engine Yard
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster