GFS+DRBD+Quorum: Help wrap my brain around this

I'm trying to figure out the best solution for GFS+DRBD.  My mental
block isn't really with GFS, though, but with clustered LVM (I think).

I understand the quorum problem with a two-node cluster.  And I
understand that DRBD is not suitable for use as a quorum disk
(presumably because it too would suffer from any partitioning, unlike a
physical array connected directly to both nodes).

Am I right so far?
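
For context, here's roughly the two-node setup I have in mind.  The
hostnames are made up and fencing is left out, but this is the relevant
part of cluster.conf, and the two_node/expected_votes bit is exactly
what makes the cluster vulnerable to a split:

        <?xml version="1.0"?>
        <cluster name="mycluster" config_version="1">
          <!-- lets a two-node cluster claim quorum with one vote -->
          <cman two_node="1" expected_votes="1"/>
          <clusternodes>
            <clusternode name="node1" nodeid="1" votes="1"/>
            <clusternode name="node2" nodeid="2" votes="1"/>
          </clusternodes>
          <!-- fencing omitted here, but it is configured -->
        </cluster>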

What I'd really like is a three-node (or larger) cluster in which only
two nodes have access to the DRBD storage.  That would solve the quorum
problem, with the third node effectively acting as a quorum server.
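
On the DRBD side I mean something like the following, with only node1
and node2 as peers and dual-primary enabled so both can mount GFS (the
resource name, devices and addresses are all made up):

        resource r0 {
          protocol C;
          net {
            allow-two-primaries;
          }
          device    /dev/drbd0;
          disk      /dev/sdb1;
          meta-disk internal;
          on node1 {
            address 192.168.10.1:7788;
          }
          on node2 {
            address 192.168.10.2:7788;
          }
        }

The third node would never see /dev/drbd0 at all; it would only
contribute its vote.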

But when I try to create a logical volume in a volume group on a device
shared by only two nodes of a three-node cluster, I get an error
indicating that the volume group cannot be found on the third node.
Which is true: the shared device isn't available on that node.
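
For reference, this is roughly the sequence I'm running (device and
names made up).  The pvcreate and vgcreate go through fine; it's the
lvcreate that fails complaining about the third node:

        pvcreate /dev/drbd0
        vgcreate -cy shared_vg /dev/drbd0    # clustered VG via clvmd
        lvcreate -L 20G -n gfs_lv shared_vg  # <-- errors out here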

In the Cluster Logical Volume Manager document, I found:

        By default, logical volumes created with CLVM on shared storage
        are visible to all computers that have access to the shared
        storage. 
        
What I've not figured out is how to tell CLVMD (or whomever) that only
nodes one and two have access to the shared storage.  Is there a way to
do this? 
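
The closest thing I've found is the activation section in lvm.conf, but
I suspect that only controls what gets activated locally rather than
telling clvmd which nodes can see the volume group.  Something like
this on the third node is pure guesswork on my part:

        # /etc/lvm/lvm.conf on node3 (the node without DRBD access)
        activation {
            # only ever activate this node's own local VG,
            # never the clustered shared_vg
            volume_list = [ "local_vg" ]
        }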

I've also read, in the GFS2 Overview document:

        When you configure a GFS2 file system as a cluster file system,
        you must ensure that all nodes in the cluster have access to the
        shared storage

This suggests that every node in a cluster running GFS must have access
to the storage, which would clearly rule out my idea of a three-node
cluster in which only two nodes have access to the shared storage.
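
For what it's worth, my plan was to create the file system with
journals for just the two DRBD nodes, along these lines (the cluster
and volume names are made up):

        mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/shared_vg/gfs_lv

If GFS really does require every cluster node to see the device,
though, that plan is moot.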

I do have one idea, but it sounds like a Rube Goldberg device: a
two-node cluster with a third machine providing access to a device via
iSCSI.  The LUN exported from that third system could be used as the
quorum disk by the two cluster nodes (effectively making that little
iSCSI target the quorum server).
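
In other words, something like this (device name and label made up; I
haven't actually tried it):

        # on one of the two cluster nodes, against the LUN imported
        # over iSCSI from the third machine:
        mkqdisk -c /dev/sdc -l quorum

        # and then in cluster.conf:
        <quorumd interval="1" tko="10" votes="1" label="quorum"/>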

This assumes that an otherwise healthy two-node cluster can survive the
failure of the quorum disk, which I've yet to confirm.

This seems ridiculously complex, so much so that I cannot imagine that
there's not a better solution.  But I just cannot get my brain wrapped
around this well enough to see it.

Any suggestions would be very welcome.

Thanks...

	Andrew



