Not real confident in 3.3

On 6/17/12 8:21 AM, Sean Fulton wrote:
> This was a Linux-HA cluster with a floating IP that the clients would 
> mount off of whichever server is active. So I set up a two-node 
> replicated cluster, with the floating IP and heartbeat, and the 
> client mounted the drive over the floating IP. I'm using the NFS 
> server built into gluster. So rpcbind and nfslock are running on the 
> server, but not nfs. The client writes to the one server with the 
> floating IP, and gluster takes care of keeping the volume in sync 
> between the two servers. I thought that was the way to do it. 
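
(For reference, I'm picturing a client mount roughly like the line below;
the floating IP, volume name and mount point are made-up placeholders. As
far as I know the Gluster NFS server only speaks NFSv3 over TCP, so it
helps to force those options explicitly.)

    # NFSv3 mount of the gluster volume via the floating IP (names assumed)
    mount -t nfs -o vers=3,proto=tcp gluster-vip:/myvol /mnt/myvol
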
It has never been clear to me how well an NFS mount fails over using a 
floating IP address, especially with the Gluster NFS server. I typically 
install gluster on the client and have it mount localhost:/whatever, so 
all the brick routing is handled locally. That's not practical with a lot 
of clients, but it's a simpler configuration in a smaller environment.
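
Concretely, the mount I mean is roughly this (volume and mount point names
are made up; it needs glusterfs-fuse on the client and a glusterd answering
on localhost):

    # native FUSE mount, volfile fetched from the local glusterd
    mount -t glusterfs localhost:/myvol /mnt/myvol

    # or the equivalent /etc/fstab entry
    localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0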

If it makes you feel better, I'm in the process of restoring a 
production SVN repo because Gluster 3.2.5 ate it after a node reboot 
last night. I haven't had time to dig through the logs in detail, but it 
seems like a self-heal on a 4-way replica did something wrong.



