Hi all, I'm setting up a two-node (physical) GlusterFS cluster (3.5 latest nightly) where each node also acts as an oVirt host (3.4 latest nightly with self-hosted engine; GlusterFS accessed through NFS/FUSE while waiting for some bug fixes etc.).
Since I already have fencing (power management) properly configured in oVirt, I'm currently configuring each GlusterFS volume with:
gluster volume set VOLUMENAME cluster.server-quorum-type none
gluster volume set VOLUMENAME cluster.quorum-type none
since, in case of a single-node failure, I need the surviving node to remain up and responsive, without excessive delays at failure time.
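For reference, this is the full sequence I'm running (VOLUMENAME is of course a placeholder for the actual volume name; note that `gluster volume info` only lists options that have been explicitly reconfigured, so it's a quick way to double-check that the settings took effect):

```shell
# Placeholder volume name; substitute your own.
VOL=VOLUMENAME

# Disable both server-side and client-side quorum on the volume,
# relying on oVirt fencing instead to avoid split-brain.
gluster volume set "$VOL" cluster.server-quorum-type none
gluster volume set "$VOL" cluster.quorum-type none

# Verify: reconfigured options appear in the "Options Reconfigured"
# section of the volume info output.
gluster volume info "$VOL" | grep -i quorum
```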
Limited testing has shown that the "surviving" node somewhat "halts" (though I did not wait more than a few minutes, maybe less) if, for example, I put the other node into maintenance through oVirt and reboot it.
Is this the proper way of achieving what I need?
Many thanks in advance for any hint/suggestion/docs-to-read.
Regards,
Giuseppe
PS: I posted the same question some time ago on gluster-devel but then realized it's not really devel-related...
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users