Re: Why did Red Hat replace the quorum partition/lock LUN with new fencing mechanisms?


 





On 6/16/06, Kevin Anderson <kanderso@xxxxxxxxxx> wrote:
On Fri, 2006-06-16 at 00:30 +0800, jOe wrote:


>
> Thank you very much, Kevin; your information is very useful to us, and
> I've shared it with our engineering team.
> Two questions remain:
> Q1: In a two-node cluster configuration, how does RHCS (v4) handle a
> heartbeat failure? (Suppose the bonded heartbeat path still fails under
> some bad circumstances.)

Current configuration requires using power fencing when running the
special case two node cluster.  If you lose heartbeat between the two
machines, both nodes will attempt to fence the other node.  The node
that wins the fencing race gets to stay up, the other node is reset and
won't be able to re-establish quorum until connectivity is restored.
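(For reference, a minimal cluster.conf for this special two-node case with power fencing might look like the sketch below; the node names, fence agent, IP address, and credentials are placeholder examples, not anything from the original mail:)

```xml
<?xml version="1.0"?>
<cluster name="ha2node" config_version="1">
  <!-- two_node="1" lets the cluster stay quorate with a single vote,
       which is why fencing must decide the race on heartbeat loss -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- power fencing via an APC switch; address and login are placeholders -->
    <fencedevice name="apc" agent="fence_apc" ipaddr="10.0.0.5" login="apc" passwd="apc"/>
  </fencedevices>
</cluster>
```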

> When using a quorum disk/lock LUN, the quorum disk acts as a tie-breaker
> and resolves split-brain if the heartbeat fails. Currently, does GFS
> do this, or some other part of RHCS?

Quorum disk support is integrated into the core cluster infrastructure, so
it is usable with RHCS alone.  You do not need GFS to use a quorum disk.
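(For illustration: a quorum disk is initialized with mkqdisk and then declared in cluster.conf roughly as below; the device path, label, vote counts, and heuristic are placeholder examples:)

```xml
<!-- first initialize the shared partition, e.g.:
       mkqdisk -c /dev/sdc1 -l ha_qdisk
     (device and label are examples) -->
<!-- with a quorum disk, the two-node special case is not needed:
     each node contributes 1 vote and the qdisk 1, so expected_votes="3" -->
<cman expected_votes="3"/>
<quorumd interval="1" tko="10" votes="1" label="ha_qdisk">
  <!-- heuristic: a node must be able to reach the gateway to keep
       its quorum-disk vote, breaking the tie on heartbeat loss -->
  <heuristic program="ping -c1 -t1 10.0.0.1" score="1" interval="2"/>
</quorumd>
```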

>
> Q2: As you mentioned, quorum disk support was added in the RHCS v4.4
> update release. So in a two-node cluster config, is "quorum disk +
> bonded heartbeat + fencing (power switch or iLO/DRAC), no GFS" the
> recommended configuration from Red Hat? Almost 80% of the cluster
> requests from our customers are for two-node clusters (10% are RAC
> and the rest are HPC clusters). We really want to provide our
> customers with a simple and solid cluster configuration for their
> production environments. Most customers configure their HA clusters
> as active/passive, so GFS is not necessary for them, and they don't
> even want GFS present in their two-node cluster systems.

If you have access to shared storage, then a two-node cluster with a
quorum disk and fencing would be a better configuration, and could be the
recommended one.  However, there are still cases where you could have a
two node cluster with no shared storage; it depends on how the
application shares state or accesses data.  But for an active/passive
two node failover cluster, I can see the quorum disk being very popular.
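(To illustrate the active/passive case without GFS, the rgmanager section of cluster.conf can define an ordered failover domain and a service holding a floating IP and a plain ext3 filesystem; every name, device, and address below is a made-up example:)

```xml
<rm>
  <failoverdomains>
    <!-- ordered domain: node1 is preferred, node2 takes over on failure -->
    <failoverdomain name="prefer-node1" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <!-- active/passive: an ordinary ext3 filesystem mounted on only one
       node at a time, so GFS is not required -->
  <service name="app" domain="prefer-node1" autostart="1">
    <ip address="10.0.0.100" monitor_link="1"/>
    <fs name="appdata" device="/dev/sdb1" mountpoint="/data" fstype="ext3"/>
  </service>
</rm>
```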

Kevin

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster



Thank you very much.

Jun
--


