http://sources.redhat.com/cluster/doc/nfscookbook.pdf

This cookbook, starting at about page 14, talks about creating an NFS resource.

Paul

----- Original Message -----
From: "Jonathan Horne" <loudredz71@xxxxxxxxx>
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Sent: Tuesday, January 26, 2010 6:36:07 PM (GMT-0600) America/Chicago
Subject: Re: NFS clustering, with SAN

--- On Tue, 1/26/10, Terry <td3201@xxxxxxxxx> wrote:

> From: Terry <td3201@xxxxxxxxx>
> Subject: Re: NFS clustering, with SAN
> To: "linux clustering" <linux-cluster@xxxxxxxxxx>
> Date: Tuesday, January 26, 2010, 5:57 PM
>
> On Tue, Jan 26, 2010 at 5:15 PM, Jonathan Horne <loudredz71@xxxxxxxxx> wrote:
> > Greetings, I am new to this list; my apologies if this topic has been
> > touched on before (so far searching has not yielded a direction for me
> > to follow).
> >
> > I have 2 servers that share a file system over SAN. Both servers export
> > this file system as an NFS share. I need to implement some sort of
> > clustering so that if one server goes down, clients don't lose their
> > connection to the exported file system. I'm hoping to find an
> > implementation with a fairly instantaneous failover, as there will be
> > pretty constant file writes/reads from multiple client servers.
> >
> > Can I please get some recommendations on what I should research to make
> > this happen? I'll gladly accept tips or documents to read; I can work
> > with either.
> >
> > Thanks,
> > Jonathan
>
> Hi Jonathan,
>
> Welcome to the list. I have a large NFS cluster that I manage. All of
> the configuration magic to reduce the downtime and headache is on the
> NFS client side. When there is a failover event, there is a short
> outage period while the IP switches over to the other node. A lot of
> things affect this, including your switching (ARP) environment. Is it
> a single export? If not, you can create each export as a separate
> service, with an associated separate IP address, and create an
> active/active type of NFS environment. Just a thought.
>
> Thanks,
> Terry

Terry, thanks for the reply.

My setup is like this: 2 servers with a shared LUN. The LUN is mounted on
both as /opt/data, and both have /opt/data listed in /etc/exports. So
basically 2 servers, and showmount -e against both of them shows the exact
same thing (and of course, since it's a shared LUN with an OCFS2 file
system, both exports contain the exact same data).

Right now we're in the testing phase of this project, and there are 4 NFS
clients connecting (2 Oracle servers writing files to the export, and 2
WebLogic servers reading those files from the export). Ultimately we're
trying to build this into an HA setup, so that if one NFS server drops off,
the clients don't know the difference. The environment is fully switched,
and all nodes (NFS servers and clients) have bonded network interfaces
connected to separate switches (which connect via port-channel).

You mentioned that you have a setup on the client side that takes care of
your failover headaches; I'm interested in hearing more about how that
works.

Thanks,
Jonathan
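
For anyone finding this thread later: a rough sketch of the kind of
rgmanager service stanza the cookbook above describes for a failover NFS
export, placed in the <rm> section of /etc/cluster/cluster.conf. The device
path, addresses, subnet, and resource names below are made up for
illustration, and the fs resource assumes a single-active (failover) mount
rather than the mount-on-both-nodes OCFS2 setup described later in the
thread, so treat it as a starting point to check against the cookbook, not
a drop-in config.

    <service autostart="1" name="nfs-data" recovery="relocate">
      <!-- file system that follows the service from node to node -->
      <fs name="data-fs" device="/dev/mapper/sanlun1" mountpoint="/opt/data"
          fstype="ext3" force_unmount="1">
        <!-- export the mount point and allow the client subnet to use it -->
        <nfsexport name="data-export">
          <nfsclient name="app-servers" target="192.168.10.0/24"
                     options="rw,sync,no_root_squash"/>
        </nfsexport>
      </fs>
      <!-- floating IP the clients mount against; it moves with the service -->
      <ip address="192.168.10.50" monitor_link="1"/>
    </service>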
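
Terry's active/active suggestion (one service and one floating IP per
export, each preferring a different node) might look roughly like the
fragment below, again with hypothetical names and addresses. The failover
domains just express which node each service normally runs on; if a node
fails, its service relocates to the survivor.

    <failoverdomains>
      <failoverdomain name="prefer-node1" ordered="1" restricted="0">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
      </failoverdomain>
      <failoverdomain name="prefer-node2" ordered="1" restricted="0">
        <failoverdomainnode name="node1" priority="2"/>
        <failoverdomainnode name="node2" priority="1"/>
      </failoverdomain>
    </failoverdomains>

    <service name="nfs-export-a" domain="prefer-node1" recovery="relocate">
      <ip address="192.168.10.51" monitor_link="1"/>
      <!-- fs/nfsexport/nfsclient resources for export A, as in the sketch above -->
    </service>
    <service name="nfs-export-b" domain="prefer-node2" recovery="relocate">
      <ip address="192.168.10.52" monitor_link="1"/>
      <!-- fs/nfsexport/nfsclient resources for export B -->
    </service>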
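
On the client-side point Terry raises: a common pattern (not something
spelled out in the thread) is to mount via the floating service address,
never a server's own address, and to use hard mounts so in-flight I/O
blocks and retries during the short failover window instead of returning
errors to Oracle or WebLogic. A hypothetical /etc/fstab line on one of the
clients, with illustrative options:

    # mount through the floating cluster IP; "hard" makes the client retry
    # across the failover instead of failing reads/writes
    192.168.10.50:/opt/data  /opt/data  nfs  hard,intr,timeo=600,retrans=2,tcp  0  0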