Hi,

On Mon, 2011-12-12 at 04:37 +0000, Jankowski, Chris wrote:
> Yu,
>
> GFS2 or any other filesystem being replicated is not aware at all of
> the block replication taking place in the storage layer. This is
> entirely transparent to the OS and filesystems, clustered or not.
> Replication happens entirely in the array/SAN layer and the servers
> are not involved at all.
>
That is true if the cluster is contained within a single physical location. If, for example, the plan is a split-site implementation, then the inter-site interconnect is added into the equation too. I'm not sure which case is being proposed here.

> So, there is nothing for Red Hat to support or not support – they just
> do not see it. Nor do they have any ability to see it even if they
> wanted to. Very often the array ports for replication are separate
> ports in separate FC zones.
>
It is probably just a case of not supporting what we do not test. I'm wondering what the use case would be if both arrays were on the same site, mirroring the same filesystem. If they are on different sites, then knowing which end is active becomes a problem: if both ends become active, even for a short time, there is no way to merge the two filesystems together again later on.

> Storage replication may have some performance impact, but this just
> looks like slower disks. GFS2 does not have any specific numerical
> requirements for IO rate, bandwidth and latency.
>
True to a certain extent, but if things get too slow then obviously it is not going to meet a reasonable expectation of performance. The disk latency has a big effect on how quickly cached data can be migrated between nodes. Also, a guarantee of reasonable network bandwidth and latency is a requirement for corosync, and thus for all the services running over it, such as fencing. So there are some issues which need to be addressed in order to ensure that everything works as intended,

Steve.
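For reference, the corosync timing behaviour mentioned above is tuned in the totem section of /etc/corosync/corosync.conf. A minimal sketch, with purely illustrative values (not recommendations, and to be tested against your actual inter-site latency):

```
totem {
    version: 2
    # Token timeout in milliseconds. Raising it tolerates higher
    # network latency, at the cost of slower failure detection
    # and therefore slower fencing.
    token: 10000
    # How many token retransmits are attempted before the node
    # is declared lost.
    token_retransmits_before_loss_const: 10
}
```

The trade-off is the important part: any value large enough to ride out inter-site latency spikes also delays the point at which a failed node can be fenced.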
> Could you quote the Red Hat KB – what exactly does it say and in what
> context?
>
> Regards,
>
> Chris Jankowski
>
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of yu song
> Sent: Monday, 12 December 2011 14:30
> To: linux clustering
> Subject: GFS2 support EMC storage SRDF??
>
> Hi GFS2 gurus,
>
> I am planning to set up two 2-node clusters in our environment, using
> EMC storage to build clustered shared filesystems (GFS2).
>
> PROD: 2 nodes (cluster 1)
> DR: 2 nodes (cluster 2)
>
> as below shows:
>
> PROD
> Cluster 1 shared LUNs for PROD (node1, node2)
> · 1 x 100G = Tier 1 (R1)
> · 1 x 200G = Tier 2 (R1)
> · 1 x 200G = Tier 3 (R1)
>
> DR
> Cluster 2 shared LUNs for DR (node1, node2)
> · 1 x 100G = Tier 1 (R2)
> · 1 x 200G = Tier 2 (R2)
> · 1 x 200G = Tier 3 (R2)
>
> My question is: does GFS2 support SRDF? Looking at the KB on the Red
> Hat site, it only says that GFS2 does not support using asynchronous
> or active/passive array-based replication, but that does not seem to
> apply to SRDF.
>
> If anyone has done this before, I'd appreciate any ideas.
>
> cheers!
>
> Yu

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster