Re: Please correct me if I'm wrong, but...

Randy Brown wrote:
> in order to configure a two-node high availability NFS failover cluster,
> I need to use GFS, correct?

No. Both options accomplish the same objective: sharing a storage
space and providing seamless service to your users. I'll try to
explain my experience with GFS vs. NFS; I ended up on GFS for the
reasons I'll lay out below. Please note that I'm not an expert in
this topic, so ignore me if this sounds insane.

So, you have a SAN and a medium to access it from two nodes (say,
Fibre Channel) -- both of your nodes will 'see' a block device, where
you can make a filesystem and mount it. Looks tempting.

If you use ext3, you can mount the filesystem on both nodes, but
changes made on one node won't be seen by the other: each node caches
the inode tables independently, and the nodes have no way to tell
each other something has changed. At best you get stale data; at
worst, a corrupted filesystem.
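
To make the danger concrete, here's the naive approach (the device
name and mountpoint are invented for the example). It may even appear
to work for a while:

    # DON'T do this -- ext3 is not cluster-aware
    node1# mkfs.ext3 /dev/sdb1
    node1# mount -t ext3 /dev/sdb1 /data
    node2# mount -t ext3 /dev/sdb1 /data   # both mounts succeed...
    node1# touch /data/hello               # ...but node2 may never see
                                           # this, and concurrent writes
                                           # will corrupt the filesystem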

You need some way to 'share' the filesystem. You can do this by means
of a client/server filesystem (such as NFS, Samba or Coda) or by a
so-called 'distributed filesystem' such as GFS or (ew!) OCFS.

With GFS, you format the device each node sees as GFS, and you can
mount it on both nodes. You need some underlying services, such as
ccs, cman and a lock manager (dlm), to let the nodes communicate.
They send each other control messages (such as 'hey, I'm using this
file'), monitor each other ('hey, are you there?') and fence
(= nightmare) properly.
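
As a rough sketch of what that looks like -- the cluster name, device,
journal count and init script paths below are placeholders, and the
details vary with your distro and GFS version:

    # one-time: make the filesystem with DLM locking and one journal
    # per node (cluster/fs names are placeholders)
    gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/sdb1

    # on each node: bring up the cluster stack, then mount
    /etc/init.d/ccsd start
    /etc/init.d/cman start
    mount -t gfs /dev/sdb1 /data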

This communication medium is usually Ethernet, and it can be as slow
as 10BaseT. This is an 'active-active' configuration, if you want to
see it that way. With NFS, you go 'active-passive': one of your nodes
mounts the device and exports it, and the other one mounts the
export.
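
Something like this, with addresses and paths invented for the
example:

    # active node: mount the device locally and export it
    # (subnet below is an example)
    mount -t ext3 /dev/sdb1 /data
    echo '/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

    # passive node: mount the export over the network
    mount -t nfs node1:/data /data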

In this case, data travels over the communication medium, and if
that's 10BaseT you're bottlenecking everything at roughly 1 MB/s.
That's why some people will tell you that you _need_ to use GFS.

If you need high availability, then you need the passive node to be
able to mount the device and share it as an NFS export (the latter
might not even be necessary, if you think about it), and to do that
quickly, without users being impacted. NFS helps by being (mostly)
stateless, and you need something else, such as Heartbeat, to 'raise'
the resources when the other node goes down.
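
With Heartbeat v1, that resource group can be a single line in
/etc/ha.d/haresources -- the virtual IP, device and mountpoint here
are made up, and nfs-kernel-server is the Debian init script name:

    # node1 is the preferred owner; on failover the surviving node
    # takes over the virtual IP (example address), mounts the device
    # and starts the NFS server
    node1 IPaddr::192.168.1.50 Filesystem::/dev/sdb1::/data::ext3 nfs-kernel-server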

The 'Schillinger page' [1] proved to be a nice source of information
regarding active-active NFS using non-vendor-locked tools. It's now
down, but Google has it cached.

Finally, I want you to know that I'm a Debian user (the whole
cluster is Debian-based), and I find GFS quite a nice piece of
software. I didn't pay anything for it, it's free (both as in beer
and as in speech), and it works OK for me, so don't worry if it's not
in your distro's packages -- you should be able to use it on any
Linux system.

Hope this helps,
Jose

[1]
http://64.233.169.104/search?q=cache:uGsLGtBSuWoJ:chilli.linuxmds.com/~mschilli/NFS/active-active-nfs.html+active+active+NFS&hl=es&ct=clnk&cd=1&client=iceweasel-a

