On Fri, Apr 27, 2018 at 07:22:29PM +1200, Thing wrote:
> I have 4 nodes, so a quorum would be 3 of 4.

Nope, gluster doesn't work quite the way you're looking at it.
(Incidentally, I started off with the same expectation that you have.)

When you create a 4-brick replica 2 volume, you don't get a single
cluster with a quorum of 3 out of 4 bricks. You get two subvolumes,
each consisting of two mirrored bricks. Each subvolume is susceptible
to split-brain if one of its two bricks goes down, regardless of how
many bricks in other subvolumes are still up. Quorum therefore has to
be handled at the subvolume level, not just for the volume as a whole.

One small wrinkle here: for quorum purposes, gluster treats the brick
in each pair that was listed first when you created the volume as "one
plus epsilon", so the subvolume continues to operate normally if the
second brick goes down, but not if the first brick is missing.

The easy solution is to switch from replica 2 to replica 2 + arbiter.
Arbiter bricks don't need to be nearly as large as data bricks because
they store only file metadata, not file contents, so you can just
scrape up a little spare disk space on two of your boxes, call that
space an arbiter, and run with it. In my case, I have 10T data bricks
and 100G arbiter bricks; I'm using a total of under 1G across all
arbiter bricks for 3T of data in the volume.

-- 
Dave Sherohman
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
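For anyone who finds the "one plus epsilon" tie-breaker confusing, here is
a tiny sketch of the rule as I understand it. This is an illustration
only, not gluster's actual code; the function name and weighting scheme
are my own invention to show why a replica 2 pair stays writable when the
second brick is down but not when the first one is:

```python
def subvolume_has_quorum(bricks_up, first_brick_index=0):
    """Illustrative quorum check for one replicated subvolume.

    bricks_up: list of booleans, one per brick, in the order the
    bricks were listed at volume-creation time.
    """
    # Every brick counts as weight 1; the first-listed brick counts
    # as "one plus epsilon", which breaks the 1-vs-1 tie in its favor.
    weights = [1.0] * len(bricks_up)
    weights[first_brick_index] += 1e-9

    total = sum(weights)
    up = sum(w for w, alive in zip(weights, bricks_up) if alive)
    return up > total / 2

# replica 2 pair: second brick down -> still quorate,
# first brick down -> quorum lost.
print(subvolume_has_quorum([True, False]))   # True
print(subvolume_has_quorum([False, True]))   # False
```

With a third (arbiter) brick in the list, any two surviving bricks hold a
strict majority, which is why replica 2 + arbiter avoids the asymmetry.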