Gluster does the sync part better than corosync. It's not an active/passive failover system; it's more all-active. Gluster handles the recovery once all nodes are back online.
That requires the client toolchain to understand that a write goes to all storage devices, not just the active one.
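As a rough sketch (the hostnames "nas1"/"nas2", volume name "gv0", and brick paths are placeholders, not my actual layout), the all-active setup looks something like this:

    # Two-node all-active replica volume.
    gluster volume create gv0 replica 2 nas1:/bricks/gv0 nas2:/bricks/gv0
    gluster volume start gv0

    # The fuse client fetches the volume layout and sends each write to
    # both bricks itself; backup-volfile-servers only matters for the
    # initial volfile fetch if nas1 happens to be down at mount time.
    mount -t glusterfs nas1:/gv0 /mnt/gv0 -o backup-volfile-servers=nas2

    # After an outage, gluster's self-heal brings the returned node back
    # in sync; this shows what still needs healing.
    gluster volume heal gv0 info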
3.10 is a long-term support release. Upgrading to 3.12 or 4 won't be a significant issue once the replacement for the NFS-Ganesha HA tooling stabilizes.
Kernel NFS doesn't understand "write to two IP addresses"; that's what NFS-Ganesha does. The gluster-fuse client works but is slower than most people would like. I use the fuse mount in my setup at work and will be changing to NFS-Ganesha as part of the upgrade to 3.10.
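For what it's worth, a minimal ganesha.conf export of a gluster volume looks roughly like the snippet below (volume name and paths are placeholders). Ganesha talks to gluster through libgfapi (FSAL_GLUSTER), so like the fuse client it is replication-aware, without the round-trip through a local kernel mount:

    # ganesha.conf snippet (placeholder names).
    EXPORT {
        Export_Id = 1;
        Path = "/gv0";
        Pseudo = "/gv0";
        Access_Type = RW;
        Protocols = 4;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";   # gluster node Ganesha connects to
            Volume = "gv0";
        }
    }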
On Wed, 2018-03-07 at 14:50 -0500, Ben Mason wrote:
Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned on using GlusterFS native NFS until I saw that it is being deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA support ended after 3.10 and its replacement is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite well. Are there any performance issues or other concerns with using GlusterFS as a replication layer and kernel NFS on top of that?

Thanks!
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900, Mark Twain

http://heretothereideas.blogspot.com/