Re: I need a sanity check.

You are confusing volumes with bricks.

You do not have a "Replicate brick"; you have one 1x3 volume composed of 3 bricks, and one 1x2 volume made up of 2 bricks. You do need to understand the difference between a volume and a brick.
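
Just to illustrate the terminology (the volume name, hostnames and brick paths below are made up, not your actual setup), a 1x3 replicate volume is created from three bricks, one per server, with something like:

    gluster volume create vmstore1 replica 3 \
        server1:/data/glusterfs/brick1 \
        server2:/data/glusterfs/brick1 \
        server3:/data/glusterfs/brick1

Here "vmstore1" is the volume; each server:/path entry is a brick.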

Also, you need to be aware of the difference between server quorum and client quorum. For client quorum you need three bricks; the third one can, however, be an arbiter brick.
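
As a rough sketch of what that could look like for your 1x2 volume (again, the names and paths are placeholders):

    # replica 3 with an arbiter: two data bricks plus one metadata-only brick
    gluster volume create vmstore2 replica 3 arbiter 1 \
        server1:/data/glusterfs/brick2 \
        server2:/data/glusterfs/brick2 \
        server3:/data/glusterfs/arbiter2

    # client-side quorum: writes are allowed only while a majority of bricks is reachable
    gluster volume set vmstore2 cluster.quorum-type auto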

Krist




On 4 July 2017 at 19:28, Ernie Dunbar <maillist@xxxxxxxxxxxxx> wrote:

Hi everyone!

I need a sanity check on our Server Quorum Ratio settings to ensure maximum uptime for our virtual machines. I'd like to modify them slightly, and while I think the theory is sound, I'm not really interested in experimenting with live servers to find out whether it actually works.

We have a Gluster array of 3 servers containing two Replicate bricks.

Brick 1 is a 1x3 arrangement where this brick is replicated on all three servers. The quorum ratio is set to 51%, so that if any one Gluster server goes down, the brick is still in Read/Write mode and the broken server will update itself when it comes back online. The clients won't notice a thing, while still ensuring that a split-brain condition doesn't occur.

Brick 2 is a 1x2 arrangement where this brick is replicated across only two servers. The quorum ratio is currently also set to 51%, but my understanding is that if one of the servers that hosts this brick goes down, it will go into read-only mode, which would probably be disruptive to the VMs we host on this brick.

My understanding is that since there are three servers in the array, I should be able to set the quorum ratio on Brick2 to 50% and the array will still be able to prevent a split-brain from occurring, because the other two servers will know which one is offline.
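
For reference, and assuming I have the commands right (<volume> is just a placeholder), the settings I'm talking about would be applied along these lines:

    # server quorum ratio (as far as I know this is set cluster-wide)
    gluster volume set all cluster.server-quorum-ratio 51%

    # enable server-side quorum enforcement on the volume
    gluster volume set <volume> cluster.server-quorum-type server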

The alternative, of course, is to simply flesh out Brick 2 with a third disk. However, I've heard that 1x2 replication is faster than 1x3, and we'd prefer the extra speed for this task.





--
Vriendelijke Groet |  Best Regards | Freundliche Grüße | Cordialement


Krist van Besien

senior architect, RHCE, RHCSA OpenStack

Red Hat Switzerland S.A.

krist@xxxxxxxxxx    M: +41-79-5936260    

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
