Re: 2 Node glusterfs quorum help

This seems to be a workaround; isn't there a proper way to achieve this through the volume's configuration? I would rather not have to set up a third fake server just to avoid the problem.
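For anyone searching the archives: one volume-level route that gets discussed for two-node setups is disabling server quorum entirely, trading split-brain protection for availability. A minimal sketch, assuming the volume name from the thread below:

  # Hedged sketch, not an endorsed fix: with server quorum off, glusterd
  # no longer stops the surviving brick when its peer disappears, but the
  # volume also loses the split-brain protection quorum provides.
  gluster volume set gfsvolume cluster.server-quorum-type none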



On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan <kaamesh@xxxxxxxxxxxxx> wrote:


It works! Thanks to Craig's suggestion, I set up a third server without a brick and added it to the trusted pool. Now it doesn't go down. Thanks a lot, guys!
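For the archive, the fix boils down to adding a quorum-only peer. A rough sketch of the commands involved (the hostname gfs3 is hypothetical):

  # Run from one of the existing nodes; gfs3 only needs glusterd running.
  # It holds no bricks and exists purely to count toward server quorum.
  gluster peer probe gfs3
  gluster peer status    # each node should now report two peers

With three peers and a 50% ratio, losing any single node still leaves a majority, so the remaining brick stays up.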

Thank You Kindly,
Kaamesh
Bioinformatician
Novocraft Technologies Sdn Bhd
C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
Selangor Darul Ehsan
Malaysia
Mobile: +60176562635

On Mon, Feb 9, 2015 at 2:19 AM, <prmarino1@xxxxxxxxx> wrote:
Quorum only applies when you have 3 or more bricks replicating each other. In other words, it doesn't mean anything in a 2-node, 2-brick cluster, so it shouldn't be set.

Based on your settings it's acting correctly, because it thinks the online brick needs at least one other brick it agrees with to stay online.
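Concretely, with cluster.server-quorum-ratio at 50% on a two-peer pool, one surviving peer is exactly half, which does not clear the ratio, so glusterd stops the local bricks. A short sketch of the arithmetic (my reading of the behavior, not authoritative):

  # Show the quorum options currently set on the volume:
  gluster volume info gfsvolume | grep -i quorum
  #
  # peers = 2, ratio = 50%  ->  quorum needs more than 1 active peer
  # lose one node: 1 of 2 alive, quorum lost, bricks get stopped
  # peers = 3, ratio = 50%  ->  2 of 3 alive, quorum holds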

Sent from my BlackBerry 10 smartphone.
From: Kaamesh Kamalaaharan
Sent: Sunday, February 8, 2015 05:50
Subject: 2 Node glusterfs quorum help

Hi guys. I have a 2-node replicated gluster setup with the quorum count set to 1 brick. My understanding is that this means the volume will not go down when one brick is disconnected. This proves false: when one brick is disconnected (I just pulled it off the network), the remaining brick goes down as well and I lose my mount points on the server.
Can anyone shed some light on what's wrong?

My gluster volume options are as follows:


Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
cluster.quorum-count: 1
auth.allow: 172.*
cluster.quorum-type: fixed
performance.cache-size: 1914589184
performance.cache-refresh-timeout: 60
cluster.data-self-heal-algorithm: diff
performance.write-behind-window-size: 4MB
nfs.trusted-write: off
nfs.addr-namelookup: off
cluster.server-quorum-type: server
performance.cache-max-file-size: 2MB
network.frame-timeout: 90
network.ping-timeout: 30
performance.quick-read: off
cluster.server-quorum-ratio: 50%
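Note that two different quorum layers appear in this listing: cluster.quorum-type/cluster.quorum-count is client-side (AFR) quorum, enforced per file operation on the mount, while cluster.server-quorum-type/cluster.server-quorum-ratio is enforced by glusterd and stops brick processes outright. Losing the mount points suggests the server-side layer fired; a hedged way to confirm this on the node (the glusterd log path is the usual default and may differ per distribution):

  # glusterd logs quorum transitions; volume status shows stopped bricks
  grep -i quorum /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
  gluster volume status gfsvolume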


Thank You Kindly,
Kaamesh



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
