On Tue, Jan 26, 2010 at 5:15 PM, Jonathan Horne <loudredz71@xxxxxxxxx> wrote:
> Greetings, I am new to this list; my apologies if this topic has been touched on before (so far, searching has not yielded a direction for me to follow).
>
> I have two servers that share a file system over a SAN. Both servers export this file system as an NFS share. I need to implement some sort of clustering so that if one server goes down, clients don't lose their connection to the exported file system. I'm hoping to find an implementation with near-instantaneous failover, as there will be fairly constant file reads and writes from multiple client servers.
>
> Can I please get some recommendations on what I should research to make this happen? I'll gladly accept tips or documents to read; I can work with either.
>
> Thanks,
> Jonathan

Hi Jonathan,

Welcome to the list. I manage a large NFS cluster. All of the configuration magic to reduce the downtime and headache is on the NFS client side. When there is a failover event, there is a short outage while the service IP switches over to the other node. A lot of things affect how long that takes, including your switching (ARP) environment.

Is it a single export? If not, you can create each export as a separate service, each with its own IP address, and build an active/active type of NFS environment. Just a thought.

Thanks,
Terry
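
To illustrate the client-side piece Terry describes: a hard NFS mount keeps application I/O blocked and retrying while the floating IP moves to the surviving node, instead of returning errors. A minimal sketch of an /etc/fstab entry follows; the hostname, export path, mount point, and option values are assumptions to adjust for your environment:

    # /etc/fstab on an NFS client
    # "hard" makes I/O block and retry across a server failover rather than
    # failing back to the application; nfs-vip is the floating (clustered)
    # service address, not an individual node's own hostname.
    nfs-vip:/export/shared  /mnt/shared  nfs  hard,intr,timeo=600,retrans=2,nfsvers=3  0 0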
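
On the switching/ARP point: when the service IP relocates, the cluster's IP resource agent normally broadcasts gratuitous ARPs so switches and clients learn the new MAC quickly. If clients seem to hang longer than the failover itself, something along these lines can help while troubleshooting (the interface name and address are placeholders):

    # Send unsolicited/gratuitous ARP for the floating service IP out eth0,
    # run on the node that has just taken over the address (iputils arping).
    arping -U -I eth0 -c 3 192.168.1.101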
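
As for the active/active layout with one service and one IP per export: on Red Hat Cluster Suite (rgmanager) that might look roughly like the cluster.conf fragment below. This is only a sketch under the assumption of a shared GFS volume on the SAN; the service names, failover domain names, addresses, device paths, and mount points are all placeholders, and the exact resource attributes should be checked against the cluster.conf documentation for your release:

    <rm>
      <!-- one service per export, each with its own floating IP; a failover
           domain per service keeps them on different nodes in normal operation -->
      <service name="nfssvc1" autostart="1" domain="prefer-node1" recovery="relocate">
        <ip address="192.168.1.101" monitor_link="1"/>
        <clusterfs name="sanvol" device="/dev/mapper/sanvol" mountpoint="/exports/shared" fstype="gfs">
          <nfsexport name="export1">
            <nfsclient name="clients1" target="192.168.1.0/24" options="rw,sync"/>
          </nfsexport>
        </clusterfs>
      </service>
      <service name="nfssvc2" autostart="1" domain="prefer-node2" recovery="relocate">
        <ip address="192.168.1.102" monitor_link="1"/>
        <clusterfs name="sanvol" device="/dev/mapper/sanvol" mountpoint="/exports/shared" fstype="gfs">
          <nfsexport name="export2">
            <nfsclient name="clients2" target="192.168.1.0/24" options="rw,sync"/>
          </nfsexport>
        </clusterfs>
      </service>
    </rm>

With a setup like this, each node normally serves one IP and one export, and if a node dies its service (IP plus export) relocates to the survivor, which then carries both.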