From: "Alan Hodgson" <ahodgson@xxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Wednesday, October 28, 2015 11:22:24 PM
Subject: New user, couple of questions (sharding+discard, arbiters, shutting down nodes)

I have a couple of 2-node clusters I'm hoping to move from drbd+ocfs2 to
glusterfs. I've been testing with 3.7.4 and I have a few questions.

1) I gather that 2-node replicas have quorum issues, and if you disable
quorum, then they have split-brain issues. Do split-brains happen even if the
network link is very reliable - I have point-to-point 10Gbit, and the only
clients will be the brick servers? If so, would it make more sense to make this
a 4-node cluster and use arbiter volumes?

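For what it's worth, a minimal sketch of what that 4-node arbiter layout could look like, assuming hypothetical hostnames and brick paths (the replica 3 arbiter 1 create syntax is available as of 3.7):

# Two replica pairs across four nodes, with each pair's arbiter brick
# hosted on a node outside that pair (hostnames/paths are made up):
gluster volume create vmstore replica 3 arbiter 1 \
    srv1:/bricks/vm1 srv2:/bricks/vm1 srv3:/bricks/arb1 \
    srv3:/bricks/vm2 srv4:/bricks/vm2 srv1:/bricks/arb2

# With three bricks per replica set there is a proper quorum majority,
# so quorum enforcement can stay enabled:
gluster volume set vmstore cluster.quorum-type auto
gluster volume set vmstore cluster.server-quorum-type server
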
2) Sharding looks pretty good, as the volumes will be used exclusively for
large VM backing images. I've been testing with 1GB shard sizes and
performance seems good. It seems that sharded volumes don't support discard,
though (i.e. fstrim within VM guests). Is there a timeline on when that might
be implemented? Discard seems to work correctly on non-sharded volumes, but then
heal times seem like they'll be an issue.
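
For reference, the sharding test setup was something like the following (volume name hypothetical; as far as I can tell the block size only applies to newly created files, so it should be set before writing data):

# Enable sharding and pick the shard size up front:
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 1GB
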
Thanks for the feedback, Alan.
So the 3.7.6 release is just two days away, and therefore it will not be possible to get the discard implementation
in by then. You can expect it to be available in 3.7.7, though (which would be the end of Nov).
Meanwhile, you can track its progress at https://bugzilla.redhat.com/show_bug.cgi?id=1261841.
-Krutika

3) Is there operational documentation for maintenance procedures, like how to
properly shut down nodes in a way that won't impact clients? What I got from
a recent mailing list post suggests something like:

killall glusterfs
killall glusterfsd
systemctl stop glusterd

That seems to work in testing; the client VMs stay responsive. Is it safe?

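One check that seems prudent before stopping a node is confirming there are no pending heals, so the surviving replica is known to be current (volume name hypothetical):

# Make sure nothing is waiting to be healed before taking this node down:
gluster volume heal vmstore info

# And confirm all bricks and self-heal daemons show as online:
gluster volume status vmstore
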
Thanks in advance for any advice.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users