Re: Re: Fencing question in geo cluster (dual sites clustering)


 



Hi Alfredo,
 
As you stated, since all the fencing methods will fail, the last resort will be manual fencing, but it is not supported.
 
Another way would have been suicide, but that is essentially what qdisk already does when communication with the other nodes and the quorum disk is lost, and it only happens after a short while (a few tens of seconds).
 
There is one point in 2-I I do not completely agree with you on.
 
B doesn't only realize it is inquorate after the 1-minute timeout; it simply gives up after that timeout (tko * interval).
Depending on the qdisk interval (1s in my case), B already knows it is no longer quorate after the first qdisk miss occurs (i.e. one interval).
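 
Just so we are talking about the same knobs: the timings I mean are the quorumd interval and tko in cluster.conf. A minimal sketch, with the label, votes and heuristic being nothing more than placeholders:
 
    <quorumd interval="1" tko="10" votes="1" label="geo_qdisk">
        <!-- the node is evicted/reset roughly after interval * tko = 10s,
             but the very first missed update already tells B it is in trouble;
             the gateway IP below is just an example -->
        <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2" tko="3"/>
    </quorumd>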
 
What disturbs me a bit is the idea of B's disks being updated during the short time before the tko * interval timeout, then A taking over B's services on a mirror leg at A that is not up to date, with B's modifications simply being discarded at resync.
 
 Let me detail this:
 
1) node B  hosts a database S made of mirror D for data and mirror L for logs
 
2) isolation occurs
 
3) B continues writing (or at least flushing) to D and L, which are both half mirrors now (Db and Lb)
 
4) regions/sectors Bd and Bi on Db and Lb are updated
 
5) the qdisk timeout (interval * tko) expires, node B is reset
 
6) fencing (the manual one) succeeds (admin ack)
 
7) A takes over S with D and L both being half mirrors (Da and La) (Da != Db, La != Lb)
 
8) regions/sectors Ad and Ai on Da and La are updated 
 
9) outage is over, B is back
 
10) depending on the mirroring solution, couldn't we end up having a partial resync from B to A concerning Lb and Db, and another partial resync from A to B concerning La and Da?
 
Array-based replication wouldn't behave this way, as there can be only one replication direction at a time (at least on the arrays I know), but software mirroring (mdadm, LVM mirror, etc.) may have such multidirectional resync capabilities, no?
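 
As far as I know md itself never merges the two legs: whichever leg is re-added to the running array becomes a resync target and its divergent writes get overwritten, so there is no bidirectional resync as far as md is concerned. Something like the following (device names purely illustrative) is what I would use to check which leg md considers the most recent before touching anything:
 
    # compare the superblocks of the two legs (site A and site B)
    mdadm --examine /dev/mapper/siteA_leg | grep -E 'Update Time|Events'
    mdadm --examine /dev/mapper/siteB_leg | grep -E 'Update Time|Events'
 
    # with a write-intent bitmap, see which regions are marked dirty
    mdadm -X /dev/mapper/siteA_leg
 
    # re-adding the stale site B leg makes it a resync target only
    mdadm /dev/md0 --re-add /dev/mapper/siteB_leg
 
What I am not sure about is whether a bitmap-based partial resync can account for the regions B wrote on its own after the split; if it cannot, a full resync from A looks like the only safe option.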
 
Brem 
 
 
  
 
 
 

 
2009/9/11, Moralejo, Alfredo <alfredo.moralejo@xxxxxxxxx>:

Hi,

 

From my point of view the problem is not so much what the "bad node" does while it is down, but what happens when communications are restored. Let me explain.

 

1. Let's start with a clean 2-node cluster, one node in each site. Data replication is done from the host using md or LVM mirror. There is a service running on each node. Qdisk, or a third node in a third site, is used for quorum.

 

2. Communications are lost at site B (where node B runs). What happens? I'm not sure, but to my understanding:

 

           I- Node B will continue working for some time until it realizes it is not quorate (depending on timeouts, let's say 1 minute). Data written during this time goes only to the disks at site B; the modifications are not written to the disks at site A.

            II- Finally, node B detects it has lost the qdisk, declares itself inquorate, and rgmanager stops all services running on node B.

            III- Node A takes some time to detect that node B is dead, and it never becomes inquorate. Services running on node A will continue working, but writes will only be done to the disks at site A. The mirror is broken.

            IV- Finally, node A detects node B is dead and will try to fence it (probably it will need to use manual fencing for confirmation).

            V- Until the fence is successful, the services originally running on node B will not be transferred to node A, so no service will ever be running simultaneously on both nodes.

            VI- After the fence is successful, the service starts on node A using the disks at site A, without any of the modifications made from the start of the outage until the failure is detected by node B (from I to II). Data modifications done from node A go only to these disks.

 

3. Communications are restored at site B. At this point node B will join the cluster again, and access to the disks at site B is recovered by node A. The mirror should always be synchronized from the disks at site A to site B, so that we have a coherent view of the data on both sets of disks; the changes done from node B during the qdisk timeout window (from I to II) will be definitively lost.
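 
With LVM mirroring that one-way resync can be forced, for example by dropping the site B leg and adding it back so it is completely rebuilt from site A. A rough sketch, with VG/LV/PV names only as illustration (mirror log placement left aside):
 
    # remove the stale mirror image that lives on the site B PV
    lvconvert -m0 vg_geo/lv_data /dev/mapper/siteB_pv
 
    # add the leg back; LVM copies everything from the site A leg
    lvconvert -m1 vg_geo/lv_data /dev/mapper/siteB_pv
 
    # watch the copy progress
    lvs -a -o +copy_percent vg_geo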

 

I think this is the expected behavior for a multisite cluster in this scenario.
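 
For what it is worth, the whole sequence in point 2 can be followed from either node with the standard tools, for instance:
 
    clustat            # member and service states as seen by rgmanager
    cman_tool status   # quorum state and vote counts
    cman_tool nodes    # the membership view of the node you run it on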

 

Best regards,

 

Alfredo

 

 

 


From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of brem belguebli
Sent: Thursday, September 10, 2009 11:23 PM
To: linux clustering
Subject: Re: Fencing question in geo cluster (dual sites clustering)

 

Hi,

 

No comments on this, RHCS gurus? Am I trying to set up something (a multisite cluster) that will never be supported?

 

Or is the qdiskd reboot action considered sufficient? (The reboot action should be a dirty power reset to prevent data from syncing.)

 

If so, all I/Os on the wrong nodes (at the isolated site) should be frozen until quorum is eventually regained. If not, it will end up with a (dirty) reboot.

 

Brem 

 

2009/8/21 brem belguebli <brem.belguebli@xxxxxxxxx>

Hi,

 

I'm trying to find out what fencing solution would best fit a dual-site cluster.

 

The cluster is equally sized on each site (2 nodes per site), with each site hosting a SAN array so that every node at either site can see both arrays.

 

The quorum disk (an iSCSI LUN) is hosted at a third site.

 

SAN and LAN use the same telco infrastructure (2 redundant DWDM loops).

 

In case something happens at the telco level (both DWDM loops broken) that leaves one of the two sites completely isolated from the rest of the world, the nodes at the good site (the one still operational) won't be able to fence any node at the wrong site (the one that is isolated): there is no way for them to reach those nodes' iLOs or do any SAN fencing, as the switches at the wrong site are no longer reachable.
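 
The only workaround I can think of is chaining fence methods in cluster.conf, iLO first and manual fencing as a last resort that an admin acknowledges once the site is confirmed down. A rough sketch, with every name and credential being a placeholder (and manual fencing not being officially supported):
 
    <clusternode name="nodeB" nodeid="2">
      <fence>
        <method name="1">
          <device name="ilo_nodeB"/>
        </method>
        <method name="2">
          <device name="last_resort" nodename="nodeB"/>
        </method>
      </fence>
    </clusternode>
    ...
    <fencedevices>
      <fencedevice agent="fence_ilo" name="ilo_nodeB" hostname="nodeB-ilo" login="admin" passwd="xxxx"/>
      <fencedevice agent="fence_manual" name="last_resort"/>
    </fencedevices>
 
But that still leaves the short window described in the next paragraph, where the isolated nodes keep writing to their local array.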

 

As qdiskd is not reachable from the wrong nodes, they end up being rebooted by qdisk, but there is a short time (a few seconds) during which the wrong nodes still see their local SAN array storage and may potentially have written data to it.

 

Any ideas or comments on how to ensure data integrity in such a setup?

 

Regards

 

Brem

 


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


