Re: Configuration of a 2 node HA cluster with gfs

Lon Hohberger wrote:
> On Thu, 2005-04-14 at 09:53 +0200, birger wrote:
> [...]
> Fencing is required in order for CMAN to operate in any useful capacity
> in 2-node mode.
I currently use manual fencing, as the other node in the cluster doesn't exist yet... :-)
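For reference, what I mean by manual fencing is just the standard fence_manual setup from the examples, roughly like this (the method/device names are arbitrary, and of course the second node's block is still missing):

<clusternode name="server1" votes="1">
  <fence>
    <method name="1">
      <device name="human" nodename="server1"/>
    </method>
  </fence>
</clusternode>
...
<fencedevices>
  <fencedevice name="human" agent="fence_manual"/>
</fencedevices>

As I understand it, fence_ack_manual then has to be run by hand whenever a node actually needs to be fenced.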

> Anyway, to make this short: You probably want fencing for your solution.
> [...]
> With GFS, the file locking should just kind of "work", but the client
> would be required to fail over.  I don't think the Linux NFS client can
> do this, but I believe the Solaris one can... (correct me if I'm wrong
> here).
When I worked with Sun clients, they could select between alternative servers at mount time, but not fail over from one server to another if the server became unavailable.

> With a pure NFS failover solution (ex: on ext3, w/o replicated cluster
> locks), there needs to be some changes to nfsd, lockd, and rpc.statd in
> order to make lock failover work seamlessly.
I once did this on a Sun system by stopping statd, merging the contents of /etc/sm* from the failing node into the takeover node, and then restarting it. That seemed to make statd/lockd recheck the locks with the clients. I was hoping something similar could be done on Linux.
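Something along these lines is what I imagine, completely untested; the nfs-utils state directory and the nfslock init script are what I believe the RHEL defaults to be, and the location of the copied state is made up:

#!/bin/sh
# Untested sketch: replay the failed node's statd state on the takeover
# node so statd/lockd re-contact the clients that held locks.
# FAILED_STATE is a made-up path where the failed node's /var/lib/nfs/sm*
# has been copied (e.g. on the shared GFS volume).
FAILED_STATE=/service/gfs001/statd-backup

/etc/init.d/nfslock stop                 # stop rpc.statd (and lockd)
cp -a $FAILED_STATE/sm/.     /var/lib/nfs/sm/
cp -a $FAILED_STATE/sm.bak/. /var/lib/nfs/sm.bak/
/etc/init.d/nfslock start                # statd should re-notify the merged clients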

> You can use rgmanager to do the IP and Samba failover.  Take a look at
> "rgmanager/src/daemons/tests/*.conf".  I don't know how well Samba
> failover has been tested.
This was a big help! The only documentation I found when searching for rgmanager on the net used <resourcegroup> instead of <service>. No wonder I couldn't get my services up!

I now have my NFS service starting with this block in cluster.conf:
<rm>
  <failoverdomains>
    <failoverdomain name="nfsdomain" ordered="0" restricted="1">
      <failoverdomainnode name="server1" priority="1"/>
      <failoverdomainnode name="server2" priority="2"/>
    </failoverdomain>
    <failoverdomain name="smbdomain" ordered="0" restricted="1">
      <failoverdomainnode name="server1" priority="2"/>
      <failoverdomainnode name="server2" priority="1"/>
    </failoverdomain>
  </failoverdomains>

  <resources>
  </resources>

  <service name="nfssvc" domain="nfsdomain">
    <ip address="X.X.X.X" monitor_link="yes"/>
    <script name="NFS script" file="/etc/init.d/nfs"/>
    <nfsexport name="NFS exports" mountpoint="/service/gfs001">
      <nfsclient name="nis-hosts" target="@nis-hosts" options="rw"/>
    </nfsexport>
  </service>

  <service name="smbsvc" domain="smbdomain">
  </service>
</rm>

It starts nfssvc, but smbsvc fails. No worries, since that one is useless at the moment; server1 is the only existing server in the cluster.

The big surprise for me was that ifconfig and exportfs don't show the IP address and exports set up by the cluster, but the IP certainly works. My problem now is that I get "permission denied" when mounting on the clients, and the log file on the server says the clients are unknown. It seems the server isn't resolving them, as they are listed only by IP address in the log. Or could it be that I cannot use <nfsexport> without a <fs>?
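If the missing <fs> is the issue, my guess is it should look something like this, with the export hanging off a file system resource (the <clusterfs> attributes are guesses on my part, and the device path is just a placeholder for my GFS volume):

<service name="nfssvc" domain="nfsdomain">
  <ip address="X.X.X.X" monitor_link="yes"/>
  <script name="NFS script" file="/etc/init.d/nfs"/>
  <clusterfs name="gfs001" mountpoint="/service/gfs001"
             device="/dev/vg001/gfs001" fstype="gfs">
    <nfsexport name="NFS exports">
      <nfsclient name="nis-hosts" target="@nis-hosts" options="rw"/>
    </nfsexport>
  </clusterfs>
</service>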

How would I normally go about having a GFS file system mounted at boot? Create a service bound to server1 that mounts it, or can it be put in /etc/fstab? smbsvc is supposed to operate on the same file system, so I want the file system to always be there, independently of nfssvc and smbsvc.
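My naive guess for the fstab route would be a line like the one below, relying on the gfs init script to mount it once the cluster is up (the device path is again just a placeholder), but I'd like to hear what the recommended way is:

/dev/vg001/gfs001  /service/gfs001  gfs  defaults  0 0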

Thanks for all the help so far. I'm getting close... :-)

--
birger
