Just to be more clear:
I have two cluster nodes: Edwin1 and Edwin2.
Edwin2 is the NFS server while Edwin1 is the NFS client.
Anyway, NFS will be running Active/Active, i.e. both nodes run NFS simultaneously.
On Edwin2 we can see the following configuration:
# df -h
10.227.167.5:/usr/bang-test/xml on /var/tmp/kunal type nfs (rw,soft,addr=10.227.167.5)
/dev/sdj1 on /usr/bang-test type ext3 (rw)
[Note: As of now, /dev/sdj1, which is a shared-storage partition, is mounted on
Edwin2. When we run the command clusvcadm -r bang -m edwin1, it relocates to the edwin1 node.]
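After a relocation like the one above, the owner column of clustat should change from edwin2-cluster to edwin1-cluster. A small sketch of how that check could be scripted; the service line here is sample text pasted into a variable, not a live cluster query:

```shell
# After running:  clusvcadm -r bang -m edwin1
# clustat should report edwin1-cluster as the owner of the service.
# Sample clustat service line (captured text, not a live cluster):
sample_line='bang                           edwin1-cluster     started'

# Field 2 of the service line is the current owner.
owner=$(echo "$sample_line" | awk '{print $2}')
echo "$owner"
```

On a real cluster you would pipe the output of `clustat` itself through the same awk filter.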
I have mounted /usr/bang-test/xml on /var/tmp/kunal, as seen above.
Correspondingly, I made the entry under NFS Mount in Luci as:
NFS Mount Resource Configuration
Name        --> NFSMount
Mount point --> /var/tmp/kunal
Host        --> 10.227.169.3
Export path --> 10.227.167.5:/usr/bang-test/xml
NFS version --> NFS3 (the <Default>; NFS4 is the other option)
Options     --> (left blank)
OK.
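For reference, the NFS Mount resource should end up issuing roughly the mount call below, assembled from the Luci fields above. This is only a sketch: the rw,soft options are an assumption based on the df output shown earlier in the thread, and here the command string is echoed rather than executed:

```shell
# Fields taken from the NFS Mount resource configuration in the thread.
host="10.227.167.5"
export_path="/usr/bang-test/xml"
mount_point="/var/tmp/kunal"

# Build the mount invocation the resource agent would perform
# (options are an assumption; the agent may add its own defaults).
mount_cmd="mount -t nfs -o rw,soft ${host}:${export_path} ${mount_point}"
echo "$mount_cmd"
```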
On Edwin1, the mount point /dev/sdj1 cannot be seen now. (After failover it will be seen.)
# df -h
10.227.167.5:/usr/bang-test/xml   206G  360M  195G   1%  /var/tmp/kunal
Now I have written another service script called bang, which means I have two scripts
in hand. I have added both to the cluster.
As you can see from the output of clustat:
edwin2# clustat
Member Status: Quorate
Member Name        Status
-----------        ------
edwin1-cluster     Online, rgmanager
edwin2-cluster     Online, Local, rgmanager
Service Name       Owner (Last)       State
------------       ------------       -----
bang               edwin2-cluster     started
NFSCluster         edwin1-cluster     started
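For comparison, the two services above would appear roughly like this in /etc/cluster/cluster.conf. This is a hand-written sketch, not my actual file: the script path and the options attribute are hypothetical, and the netfs attribute names are taken from the rgmanager resource agents:

```xml
<rm>
  <!-- Custom service script (file path is hypothetical) -->
  <service name="bang" autostart="1" recovery="relocate">
    <script name="bang-script" file="/etc/init.d/bang"/>
  </service>

  <!-- NFS client-side mount, matching the Luci NFS Mount resource -->
  <service name="NFSCluster" autostart="1" recovery="relocate">
    <netfs name="NFSMount" host="10.227.167.5"
           export="/usr/bang-test/xml" mountpoint="/var/tmp/kunal"
           fstype="nfs" options="rw,soft"/>
  </service>
</rm>
```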
So now both services are added. When I perform a failover, it does not appear to work.
Is there anything wrong in what I am doing?
Please advise.
_____________________________________________
From: Singh Raina, Ajeet
Sent: Thursday, August 14, 2008 6:03 PM
To: 'linux clustering'
Cc: 'piyush yaduvanshi'
Subject: NFS Issue in Cluster??
Hello Guys,
I have a doubt and I hope you can help me with it.
I have been stuck on adding the NFS resource and service.
Let me tell you… I want to run the NFS service on one of the Red Hat cluster nodes. On failover to the second cluster node, NFS should come up there.
But I don't know why the way to configure this is documented in such a complicated way.
All I can see is:
[code]
1. NFS Mount
Name — Create a symbolic name for the NFS mount.
Mount Point — Choose the path to which the file system resource is mounted.
Host — Specify the NFS server name.
Export Path — NFS export on the server.
NFS version — Specify NFS protocol:
o NFS3 — Specifies using NFSv3 protocol. This is the default setting.
o NFS4 — Specifies using NFSv4 protocol.
Options — Mount options. For more information, refer to the nfs(5) man page.
Force Unmount checkbox — If checked, forces the file system to unmount. The default setting is unchecked. Force Unmount kills all processes using the mount point to free up the mount when it tries to unmount.
NFS Client
Name — Enter a name for the NFS client resource.
Target — Enter a target for the NFS client resource. Supported targets are hostnames, IP addresses (with wild-card support), and netgroups.
Options — Additional client access rights. For more information, refer to the General Options section of the exports(5) man page.
NFS Export
Name — Enter a name for the NFS export resource.
[/code]
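My rough understanding so far (please correct me if this is wrong) is that these three resource types map to different cluster.conf elements and different roles. NFS Mount makes the node a client of some other server, while NFS Export plus NFS Client make the node a server. A sketch, with hypothetical names and values:

```xml
<!-- NFS Mount: this node is a *client* mounting a remote export -->
<netfs name="NFSMount" host="10.227.167.5"
       export="/usr/bang-test/xml" mountpoint="/var/tmp/kunal"/>

<!-- NFS Export + NFS Client: this node is the *server*; it exports a
     locally mounted file system and grants access to listed targets -->
<fs name="bang-fs" device="/dev/sdj1" mountpoint="/usr/bang-test" fstype="ext3">
  <nfsexport name="bang-export">
    <nfsclient name="all-hosts" target="10.227.169.*" options="rw"/>
  </nfsexport>
</fs>
```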
Please help me understand the difference between NFS Mount, NFS Export and NFS Client in this context.
I just want to do failover, i.e. when the node running NFS is stopped, NFS should start on the other node.
Please help.
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster