Re: DR node in a cluster


 



Hi,

On Wed, 2011-07-06 at 11:15 -0500, Paras pradhan wrote:
> Hi,
> 
> 
> My GFS2 Linux cluster has three nodes: two at the data center and one
> at the DR site. If the two nodes at the data center break or turn
> off, all the services move to the DR node. But if the two nodes at
> the data center lose communication with the DR node, I am not sure
> how the cluster handles the split brain, so I am looking for
> recommendations for this kind of scenario. I am using qdisk votes
> (=3) in this case.
> 
> 
Using GFS2 in stretched clusters like this is not something that we
support or recommend. It might work in some circumstances, but it is
very complicated to ensure that recovery works correctly in all
cases. If you don't have enough nodes at a site to establish quorum,
then when communication fails between sites you must fence those
nodes, or risk data corruption when communication is re-established.
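To see why the lone DR node can never stay up on its own, here is a
minimal sketch of the vote arithmetic, using the numbers from the
cman_tool output quoted below (3 nodes x 1 vote, qdisk = 3 votes,
quorum = 4). The helper name is hypothetical, and it assumes the qdisk
is reachable only from the data-center site:

```python
def has_quorum(node_votes, qdisk_votes, expected_votes):
    """A partition is quorate when its votes reach floor(expected/2) + 1."""
    quorum = expected_votes // 2 + 1
    return node_votes + qdisk_votes >= quorum

EXPECTED = 6  # 3 nodes (1 vote each) + 3 qdisk votes, per cman_tool

# Data-center partition: 2 nodes that can still see the qdisk.
print(has_quorum(2, 3, EXPECTED))      # 2 + 3 = 5 >= 4 -> quorate

# DR partition: 1 node with no qdisk access.
print(has_quorum(1, 0, EXPECTED))      # 1 < 4 -> inquorate, gets fenced
```

So with the qdisk at the data center, a site split always resolves in
favour of the data-center pair; the catch is the reverse failure, where
the data center goes down and the single DR node (1 vote) can never
reach quorum on its own.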

Steve.

> --
> Here is the cman_tool status output.
> 
> 
> 
> 
> -
> Version: 6.2.0
> Config Version: 74
> Cluster Name: vrprd
> Cluster Id: 3304
> Cluster Member: Yes
> Cluster Generation: 1720
> Membership state: Cluster-Member
> Nodes: 3
> Expected votes: 6
> Quorum device votes: 3
> Total votes: 6
> Quorum: 4  
> Active subsystems: 10
> Flags: Dirty 
> Ports Bound: 0 11 177  
> Node name: vrprd1.hostmy.com
> Node ID: 2
> Multicast addresses: x.x.x.244 
> Node addresses: x.x.x.96 
> --
> 
> 
> Thanks!
> Paras.
> 
> 
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

