Re: DR node in a cluster

Paras,

With your SAN on one site, what is the point of having a stretched cluster?

If your datacenter, where the SAN is located, burns down, you’ve lost all your data.

The DR servers in the DR datacenter are kind of useless without the data on shared storage.

Regards,

Chris

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Paras pradhan
Sent: Thursday, 7 July 2011 03:17
To: linux clustering
Subject: Re: DR node in a cluster

Chris,

All the nodes are connected to a single SAN at this moment through fibre.

@steven:

--
If you don't have enough nodes at a site to allow quorum to be
established, then when communication fails between sites you must fence
those nodes or risk data corruption when communication is
re-established,
--

Yes, true, but in this case a single node can make the cluster quorate (qdisk votes=3, node votes=3, total=6), which is not recommended, I guess (?).
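
For reference, here is a minimal sketch of how I believe that vote layout looks in cluster.conf; the node names and the qdisk label/timings below are placeholders, not our real values:

--
<cman expected_votes="6"/>
<clusternodes>
  <clusternode name="node1" nodeid="1" votes="1"/>  <!-- data center -->
  <clusternode name="node2" nodeid="2" votes="1"/>  <!-- data center -->
  <clusternode name="node3" nodeid="3" votes="1"/>  <!-- DR site -->
</clusternodes>
<!-- qdisk carries 3 votes, so any single node (1) + qdisk (3) = 4 = quorum -->
<quorumd votes="3" label="vrprd-qdisk" interval="1" tko="10"/>
--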

Steve

On Wed, Jul 6, 2011 at 11:46 AM, Jankowski, Chris <Chris.Jankowski@xxxxxx> wrote:

Paras,

A curiosity question:

How do you make sure that your storage will survive the failure of *either* of your sites without loss of data and with continuity of service?

What storage configuration are you using?

Thanks and regards,


Chris

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Paras pradhan
Sent: Thursday, 7 July 2011 02:15
To: linux clustering
Subject: DR node in a cluster

Hi,

My GFS2 Linux cluster has three nodes: two at the data center and one at the DR site. If the two nodes at the data center break or are turned off, all the services move to the DR node. But if the two nodes at the data center lose communication with the DR node, I am not sure how the cluster handles the split brain, so I am looking for recommendations for this kind of scenario. I am using qdisk votes (=3) in this case.
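
(If I read cman's quorum math correctly, quorum = floor(total_votes / 2) + 1 = floor(6 / 2) + 1 = 4, which matches the "Quorum: 4" line in the cman_tool output below; so the lone DR node is quorate only if it also claims the qdisk's 3 votes.)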

--

Here is the cman_tool status output.

--
Version: 6.2.0
Config Version: 74
Cluster Name: vrprd
Cluster Id: 3304
Cluster Member: Yes
Cluster Generation: 1720
Membership state: Cluster-Member
Nodes: 3
Expected votes: 6
Quorum device votes: 3
Total votes: 6
Quorum: 4
Active subsystems: 10
Flags: Dirty
Ports Bound: 0 11 177
Node ID: 2
Multicast addresses: x.x.x.244
Node addresses: x.x.x.96
--

Thanks!

Paras.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
