RE: Remove the clusterness from GFS

On Wed, 2007-01-10 at 15:00 -0800, Lin Shen (lshen) wrote:

> For instance, we support hot removal/insertion of nodes in the system,
> I'm not clear how fencing will get in the way. We're not planning to add
> any fencing hardware, and most likely will set fencing mechanism as
> manual. Ideally, we'd like to disable fencing except the part that is
> needed for running GFS.

Hmm, well, GFS requires every node mounting a volume directly to have
fencing.

You can use NFS to export the same GFS volume from multiple servers.
The idea here is that with more than one NFS server exporting the same
file system, you can achieve very high parallel data throughput -
near the maximum the SAN allows - because the network bandwidth and
server bottlenecks are, in theory, eliminated.

This solution requires building a GFS cluster, say, 3 or 5 nodes + a
SAN.  Make one or more GFS volumes on the SAN, and mount on all nodes.
Export from all nodes.  Adding more clients is simple.  Just mount the
NFS export.  Fencing is needed for the GFS cluster, but not the NFS
clients.
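Roughly, the commands involved look like this (the device path, mount
points, and server name below are made up for illustration; adjust for
your setup):

```shell
# On each GFS cluster node: mount the shared volume
# (/dev/vg_san/gfs1 is a hypothetical clustered LV on the SAN)
mount -t gfs /dev/vg_san/gfs1 /mnt/gfs1

# In /etc/exports on each server node, export the same tree:
#   /mnt/gfs1  *(rw,sync,fsid=1234)
# Pinning the same fsid on every server keeps NFS file handles
# consistent across servers.  Then reload the export table:
exportfs -ra

# On an NFS client (no fencing needed here) -- pick any server node;
# "gfs-srv1" is a made-up hostname:
mount -t nfs gfs-srv1:/mnt/gfs1 /mnt/data
```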

You could do the same thing with Lustre (sort of).  Build a server
cluster, and mount over the network.  You'd only need fencing hardware
for the metadata server (I *think*; never tried it).  Adding a client is
easy: set up Lustre on the client and mount the file system.
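For the Lustre side, client setup is along these lines (the MGS node
name, file system name, and mount point are invented for the example,
and the exact syntax varies by Lustre version):

```shell
# Load the Lustre client modules
modprobe lustre

# Mount the file system from the management server
# ("mgs-node" and fsname "lustre1" are hypothetical)
mount -t lustre mgs-node:/lustre1 /mnt/lustre
```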

There's some "waste" in the sense that to build either of these
solutions, you need several machines that act as a "storage farm" for
the best possible reliability.  

-- Lon


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
