Re: qdisk WITHOUT fencing

>The Linux Cluster seems to be saying "A node is the centre of the world and can control it".

While I won't question your knowledge on the subject, doesn't a quorum
mitigate this to some degree?

As for the OP's original dilemma: if you can design the fault tolerance into
your own procedure, you can trivially write your own fence agent script, as I
did for an HP ProCurve switch (the iSCSI agent in 5.5 didn't work in my
5.4 cluster because of newer dependencies).
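
FWIW, here's a minimal sketch of what such an agent can look like, assuming
the usual fenced convention that options arrive on stdin as name=value lines
and that exit status 0 means the fence succeeded; the device-specific part is
just a stub you'd replace with your own ProCurve/SNMP/whatever logic:

#!/usr/bin/env python
# Minimal custom fence agent sketch, assuming the fenced convention of
# "name=value" options on stdin and exit status 0 meaning success.
import sys

def read_options(stream):
    """Parse name=value pairs, one per line, skipping blanks and comments."""
    opts = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name, value = line.split("=", 1)
        opts[name.strip()] = value.strip()
    return opts

def fence_node(opts):
    """Stub: replace with the real work (SNMP to the switch, iptables, ...)."""
    node = opts.get("nodename", "unknown")
    action = opts.get("action", opts.get("option", "off"))
    sys.stderr.write("would perform '%s' against %s\n" % (action, node))
    return True  # pretend the device confirmed the node is cut off

if __name__ == "__main__":
    sys.exit(0 if fence_node(read_options(sys.stdin)) else 1)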

You can use whatever technology you want in that script, such as iptables,
switch ports, etc., and return a "success" to fenced so things carry on.
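
To make the iptables idea concrete, the stub above could just shell out; a
rough sketch, where the "ipaddr" key and the single DROP rule are assumptions
about what your cluster.conf passes in and what your storage paths look like:

import subprocess

def fence_node(opts):
    """Cut the failed node off by dropping its traffic with iptables.
    In a real setup you'd block every path it has to shared storage."""
    peer_ip = opts.get("ipaddr")  # assumed option name; match your cluster.conf
    if not peer_ip:
        return False
    cmd = ["iptables", "-I", "INPUT", "-s", peer_ip, "-j", "DROP"]
    return subprocess.call(cmd) == 0  # iptables exits 0 once the rule is in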

I use DRBD between my two nodes without a qdisk and have DRBD play a role in
mitigating the issues you describe. My two nodes are also separated by fiber,
and I ran into the same issue where one node might not be able to fence the
other properly.
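
For reference, the DRBD side of that is just the fencing policy plus a
fence-peer handler in the resource definition. A rough sketch in DRBD
8.3-style syntax, with the handler path being a placeholder for whatever
script you hook in there:

resource r0 {
  disk {
    fencing resource-only;  # call the fence-peer handler when replication is lost
  }
  handlers {
    # placeholder path: point this at your own fence-peer script;
    # its exit code tells DRBD whether the peer was outdated or shot
    fence-peer "/usr/local/sbin/my-fence-peer.sh";
  }
  # ... usual on <host>, device, disk and address stanzas ...
}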

jlc




