Re: partitioning of filesystems in cluster nodes

RR wrote:
Right, it is indeed what I want to do. But now let me understand the basics
of GFS. GFS actually runs on the SAN, but the GFS drivers/software that I
install on each of my cluster nodes just allows each of these nodes to see
these volumes? Something analogous to, say, iSCSI initiators on a node to view
the LUNs on an iSCSI SAN? If that's true, then is it possible for me to, say,
have my /opt/local installed on the GFS-managed filesystem on the SAN, such
that whatever application is installed once in this directory can be
accessed by all nodes mounting that filesystem? So kind of an install
once, use everywhere kind of a deal?

Thanks so much
RR
Hi RR,

GFS is the file system that runs on each of the nodes in the cluster. It's
basically a kernel driver that controls how and where data is stored on a
logical volume. To make a bunch of computers ("nodes") cooperatively share the
data on a SAN, GFS needs to coordinate through a cluster locking protocol. One
such protocol is DLM, the distributed lock manager, which is also a kernel
driver. Its job is to ensure that nodes in the cluster that share the data on
the SAN don't corrupt each other's data.
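So yes, your "install once, use everywhere" idea for /opt/local works: create
one GFS file system on the shared logical volume and mount it on every node.
Here's a minimal sketch; the cluster name, volume path and journal count are
examples, not anything from your actual setup:

    # Make the GFS file system once, from any node.
    # -p lock_dlm selects DLM locking; -t is ClusterName:FSName;
    # -j allocates one journal per node that will mount it.
    gfs_mkfs -p lock_dlm -t mycluster:opt_local -j 4 /dev/vg01/opt_local

    # Mount it on every node (or put the equivalent line in /etc/fstab):
    mount -t gfs /dev/vg01/opt_local /opt/local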

Since GFS manages the contents of a logical volume, there is still the
underlying logical volume manager, LVM, that takes care of things like
spanning physical volumes, striping, hardware and software RAID, mirroring
and such. For GFS you need the cluster-aware LVM2 (CLVM, which runs the clvmd
daemon on each node), but not much changes other than the locking type
specified in /etc/lvm/lvm.conf.
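In practice the change is small. Assuming the lvm2-cluster package is
installed, you enable cluster locking on every node, then create the shared
volumes once from any node; the device and volume names below are just
examples:

    # Switch LVM2 to cluster-wide locking (sets locking_type = 3
    # in /etc/lvm/lvm.conf), then start the clustered LVM daemon:
    lvmconf --enable-cluster
    service clvmd start

    # Create the shared volume group and logical volume as usual:
    pvcreate /dev/sdb1
    vgcreate vg01 /dev/sdb1
    lvcreate -L 200G -n opt_local vg01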

This only applies to RHEL4, by the way. The web page you referenced was for
RHEL3, and in RHEL3 the only mirroring available was md, which GFS does not
work with. GFS works only with device-mapper (LVM2) mirroring
(specifically, cluster mirroring).
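So if you want mirroring under GFS, it has to be done at the LVM2 layer, and
on a cluster that also needs the cluster mirror infrastructure (cmirror) so
all nodes agree on the mirror state. A hypothetical example (name and size
invented):

    # A two-legged device-mapper mirror (-m 1 = one mirror copy):
    lvcreate -L 100G -m 1 -n opt_local vg01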

Now there are lots of other little pieces to Cluster Suite besides GFS that
are necessary to make a cluster work (a minimal configuration sketch follows
this list):

(1) Fencing protects your data from split-brain corruption when a node has a
    hardware failure and stops communicating.
(2) The cluster manager (CMAN) handles communications between nodes.
(3) The resource group manager (rgmanager) handles the starting, stopping and
    moving of cluster services such as NFS when nodes fail, etc. (if you have
    any).
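All three are tied together through /etc/cluster/cluster.conf. To give you a
rough idea of how the pieces fit, here is a bare-bones sketch; the node names,
fence device, address and password are all invented for illustration:

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="1">
      <clusternodes>
        <clusternode name="node1" votes="1">
          <fence>
            <method name="1">
              <device name="apc" port="1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="node2" votes="1">
          <fence>
            <method name="1">
              <device name="apc" port="2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="apc" agent="fence_apc" ipaddr="10.0.0.5"
                     login="admin" passwd="secret"/>
      </fencedevices>
      <rm>
        <!-- rgmanager services go here, only if you run any -->
      </rm>
    </cluster>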
I hope this helps.

Regards,

Bob Peterson
Red Hat Cluster Suite

