I've had zero issues using client quorum (cluster.quorum-type=auto) and three nodes/bricks, tested with both clean node shutdowns and hard node kills.
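For reference, the relevant setting is roughly the following (the volume name "gv0" is just a placeholder):

gluster volume set gv0 cluster.quorum-type auto
# with quorum-type auto, a client only allows writes while a majority of the
# bricks in each replica set (2 of 3 here) are reachable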

On 29 October 2015 at 03:52, Alan Hodgson <ahodgson@xxxxxxxxx> wrote:
I have a couple of 2-node clusters I'm hoping to move from drbd+ocfs2 to
glusterfs. I've been testing with 3.7.4 and I have a few questions.
1) I gather that 2-node replicas have quorum issues, and if you disable
quorum, then they have split-brain issues. Do split-brains happen even if the
network link is very reliable - I have point-to-point 10Gbit, and the only
clients will be the brick servers? If so, would it make more sense to make this
a 4-node cluster and use arbiter volumes?
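For context, the arbiter layout I'd be looking at would be created with
something like this (hostnames and brick paths are placeholders, going from
the 3.7 arbiter docs, so please correct me if the syntax is off):

gluster volume create gv0 replica 3 arbiter 1 \
    server1:/data/brick1/gv0 \
    server2:/data/brick1/gv0 \
    server3:/data/arbiter1/gv0
gluster volume start gv0
# the arbiter brick stores only file names and metadata, so it can live on a
# small node and still break ties for quorum purposes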
2) Sharding looks pretty good, as the volumes will be used exclusively for
large VM backing images. I've been testing with 1GB shard sizes and
performance seems good. It seems that sharded volumes don't support discard,
though (i.e. fstrim within VM guests). Is there a timeline on when that might
be implemented? Discard seems to work correctly on non-sharded volumes, but then
heal times seem like they'll be an issue.
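For reference, the sharding options I've been testing are roughly these
(volume name is a placeholder again):

gluster volume set gv0 features.shard on
gluster volume set gv0 features.shard-block-size 1GB
# inside a guest I'd then expect to exercise discard with something like:
#   fstrim -v /mountpoint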
3) Is there operational documentation for maintenance procedures, like how to
properly shut down nodes in a way that won't impact clients? What I got from
a recent mailing list post suggests something like:
killall glusterfs        # client-side processes (fuse mounts, self-heal, NFS)
killall glusterfsd       # brick processes
systemctl stop glusterd  # management daemon last
That seems to work in testing; the client VMs stay responsive. Is it safe?
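And I'm assuming bringing a node back is just restarting glusterd and then
letting self-heal catch up before touching the next node, something like
(volume name again a placeholder):

systemctl start glusterd       # glusterd restarts the brick and self-heal processes
gluster volume heal gv0 info   # wait for pending heal entries to drain to zero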
Thanks in advance for any advice.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
--
Lindsay