Re: GNBD Configuration Question

On Fri, Feb 10, 2006 at 01:03:36PM -0600, Curt Moore wrote:
> Hello all.
> 
> I've been experimenting with RH Cluster Suite and GFS and have come
> upon a few questions which I hope the list will be able to help with.
> Kudos to all of the developers, RHCS and GFS are really cool!
> 
> To my question, I'm trying to set up a storage network with GFS and
> GNBD using a 3-layered approach as shown in Figure 5 in the following
> link:
> 
> http://www.redhat.com/magazine/008jun05/features/gfs/#fig=multipath
> 
> and also here:
> 
> http://www.redhat.com/docs/manuals/csgfs/browse/rh-gfs-en/s1-ov-perform.html#S2-OV-MODPRICE
> 
> Obviously, the intent is to eliminate any SPOF for the storage network.
> 
> For the sake of example, let's say that I have 2 GNBD servers
> connected directly to the SAN, snode001 and snode002.
> 
> If I export the same SAN block device from these 2 GNBD servers
> for load sharing purposes and snode001 fails, how do the GNBD clients
> importing that block device from snode001 know that they can also find
> that block device on snode002?  Is this somehow handled at a lower
> level by configuring a resource within the RH Cluster Suite?
> 
> I've scoured the list archives and found the following example, using
> multipath, which seems to come very close:
> 
> http://www.redhat.com/archives/linux-cluster/2005-April/msg00065.html
> 
> However, the documentation states that multipath GNBD cannot be used
> with GFS 6.1:
> http://www.redhat.com/docs/manuals/csgfs/browse/rh-gfs-en/ch-gnbd.html
> 
> Is there another way of accomplishing this without using multipath or
> am I misunderstanding the concept of how multipath is utilized in this
> setup?

The only way for a GNBD client to access the same data served up by two GNBD
servers is to use some multipath implementation. Currently there is no
multipath implementation that supports GNBD devices. Sorry.
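For anyone searching the archives later: the two-server export itself is straightforward; it is only the client-side failover between the two paths that is missing. A minimal sketch of the export/import half, assuming the stock GNBD tools (gnbd_serv, gnbd_export, gnbd_import) and hypothetical device paths and export names:

```shell
# On each GNBD server (snode001 and snode002), start the server
# daemon and export the shared SAN device. The device path and
# export names below are hypothetical examples; giving each server
# a distinct export name avoids a name clash on the clients.
gnbd_serv
gnbd_export -d /dev/sdb1 -e gfs_data_s1   # on snode001
gnbd_export -d /dev/sdb1 -e gfs_data_s2   # on snode002

# On each GNBD client, import from both servers. This produces two
# block devices under /dev/gnbd/ that both point at the same SAN LUN:
gnbd_import -i snode001
gnbd_import -i snode002
```

Without a multipath layer sitting on top of the two imported devices, GFS can only be pointed at one of them, so when that server dies the client has no mechanism to fail over to the other import.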

-Ben

> Any feedback would be appreciated.
> 
> Thanks!
> -Curt
> 
> --
> 
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
