Re: GFS2 over NFS4

Hi guys,

@Fabio I have just realised that I have no fencing device at all, as STONITH is set to false; however, some of my resources are set to fence on failure :/. There is really no fencing option for Hyper-V unless I compile my own version of libvirt :(.

@Mohd this is what I'm actually trying to use. I managed to find out that localflocks needs to be used to mount GFS2 on the exporting nodes, and my cluster is basically configured to meet all the requirements in that document. The overall idea is a bit complex, to be honest: I'm going to have multiple nodes with a shared VHDX mounted on each node in the cluster. Each share will be allocated its own VIP, and each node will export different resources. The exports will be tied to the IPs, so all nodes in the cluster are utilised, while at the same time any single node can take over all the resources.
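Roughly, each exporting node mounts GFS2 with localflocks and Pacemaker ties each export to its own VIP so they fail over together. A sketch of what I mean (the device path, IP, network and share name here are just placeholders, not my real config):

```shell
# On each exporting node: mount GFS2 with localflocks so lock requests
# stay node-local instead of also going through DLM underneath NFS.
mount -t gfs2 -o localflocks /dev/vg_shared/lv_share1 /export/share1

# Pacemaker: one VIP per share, grouped with its export so both
# move together on failover (IP/subnet/directory are examples only).
pcs resource create vip_share1 ocf:heartbeat:IPaddr2 \
    ip=192.168.10.101 cidr_netmask=24
pcs resource create export_share1 ocf:heartbeat:exportfs \
    clientspec=192.168.10.0/24 options=rw,sync,no_root_squash \
    directory=/export/share1 fsid=1
pcs resource group add grp_share1 vip_share1 export_share1
```

With one such group per share, spreading the groups across nodes gives the active/active utilisation I described, while any node can host all groups if the others fail.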

Maybe someone has different ideas?

Regards,
TH

On 23 April 2015 at 14:05, Mohd Irwan Jamaluddin <mij@xxxxxxxxxx> wrote:
On Thu, Apr 23, 2015 at 6:11 PM, Thorvald Hallvardsson <thorvald.hallvardsson@xxxxxxxxx> wrote:
Hi guys,

I need some help and answers related to sharing a GFS2 file system over NFS. I have read the RH documentation, but some things are still a bit unclear to me.

First of all, I need to build a POC for a shared storage cluster which will initially contain 3 nodes. This is all going to run as a VM environment on Hyper-V. The idea is to share a virtual VHDX across the 3 nodes, put LVM and GFS2 on top of it, and then export it via NFS to the clients. I have the initial cluster built on CentOS 7 using Pacemaker. I generally followed the RH docs, so I ended up with a simple GFS2 cluster and Pacemaker managing fencing and a floating VIP resource.
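For context, the base cluster follows the usual RH recipe for GFS2 on RHEL 7/CentOS 7; something along these lines (cluster and node names are examples, not my exact commands):

```shell
# Base three-node cluster (pcs 0.9 syntax as shipped with CentOS 7)
pcs cluster setup --name storage_cluster node1 node2 node3
pcs cluster start --all

# GFS2 requires the cluster to freeze rather than stop on quorum loss
pcs property set no-quorum-policy=freeze

# DLM and clustered LVM as ordered, interleaved clones on every node
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
```

The GFS2 Filesystem resource then sits on top of clvmd-clone with the same ordering/colocation pattern.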

Now I'm wondering about the NFS side. The Red Hat documentation is a bit conflicting, or rather unclear in places, and I found quite a few guides on the internet for similar configurations. Some of them suggest mounting the NFS share on the clients with the nolock option, while the RH docs mention localflocks, and I got confused about what is supposed to go where. I don't know if my understanding is correct, but the reason to "disable" NFS locking is that GFS2 is already doing it via DLM, so there is no need for NFS to do the same thing, which would effectively give me a double locking mechanism. So the first question is: where am I supposed to set up locks (or rather no locks), and what should the export look like?
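What the guides I found typically suggest for the client side is something like this (the VIP address and paths are placeholders; I'm not sure yet this is the right combination, which is exactly my question):

```shell
# NFSv3 client mount with NLM locking disabled ("nolock"), on the
# theory that the server mounts GFS2 with localflocks anyway, so
# NFS-level locks would only ever be local to one exporting node.
mount -t nfs -o vers=3,nolock 192.168.10.101:/export/share1 /mnt/share1
```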

The second thing is that I was thinking about going a step further and using NFSv4 for the exports. However, from what I have read, NFSv4 does locking by default and there is no way to disable it. Does that mean NFSv4 is not suitable in this case at all?

That's all for now.

I appreciate your help.


This is the latest KB article on combining GFS/GFS2 with NFS that I know of:

Does Red Hat recommend exporting GFS or GFS2 with NFS or Samba in a RHEL Resilient Storage cluster, and how should I configure it if I do?
https://access.redhat.com/solutions/20327 


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

