Excuse my top posting.

Kaamesh,

There are two types of quorum implementations:

1) Server quorum:
     cluster.server-quorum-type: server/none
     cluster.server-quorum-ratio: 0-100% (default: >50%)
   If the server-quorum-ratio is not met, the brick (glusterfsd) daemons on
   that server are killed.

2) Client quorum:
     cluster.quorum-type: fixed
     cluster.quorum-count: 1
   If this quorum is not met, the volume becomes read-only.

In your case, if you don't want the volume to go down (without adding another
node to the cluster), disable server quorum (set cluster.server-quorum-type to
none) and keep only the `cluster.quorum-type: fixed' and
`cluster.quorum-count: 1' options.

In your volume options I see that you are currently using both of the above
methods to achieve quorum.
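For example, assuming the volume is still named gfsvolume as in your output
below, something along these lines should do it (untested, so please check the
result with `gluster volume info' afterwards):

  # stop server-side quorum from killing the brick processes
  gluster volume set gfsvolume cluster.server-quorum-type none

  # let client-side quorum be satisfied by a single brick,
  # so the volume stays writable when one brick is down
  gluster volume set gfsvolume cluster.quorum-type fixed
  gluster volume set gfsvolume cluster.quorum-count 1

Alternatively, `gluster volume reset gfsvolume cluster.server-quorum-type'
should also work if you prefer to simply drop that option.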
ref: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options

You can see some explanation in the above link.

-sac

----- Original Message -----
> From: "ML mail" <mlnospam@xxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Monday, February 9, 2015 1:53:56 PM
> Subject: Re: 2 Node glusterfs quorum help
>
> This seems to be a workaround; isn't there a proper way to achieve this
> through the volume configuration? I would not like to have to set up a
> third fake server just to avoid that.
>
>
> On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan
> <kaamesh@xxxxxxxxxxxxx> wrote:
>
> It works! Thanks to Craig's suggestion, I set up a third server without a
> brick and added it to the trusted pool. Now it doesn't go down. Thanks a
> lot, guys!
>
> Thank You Kindly,
> Kaamesh
> Bioinformatician
> Novocraft Technologies Sdn Bhd
> C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
> Selangor Darul Ehsan
> Malaysia
> Mobile: +60176562635
> Ph: +60379600541
> Fax: +60379600540
>
> On Mon, Feb 9, 2015 at 2:19 AM, <prmarino1@xxxxxxxxx> wrote:
>
> Quorum only applies when you have 3 or more bricks replicating each other.
> In other words it doesn't mean anything in a 2-node, 2-brick cluster, so it
> shouldn't be set.
>
> Based on your settings it's acting correctly, because it thinks the online
> brick needs a minimum of one other brick online that it agrees with.
>
> Sent from my BlackBerry 10 smartphone.
> From: Kaamesh Kamalaaharan
> Sent: Sunday, February 8, 2015 05:50
> To: gluster-users@xxxxxxxxxxx
> Subject: 2 Node glusterfs quorum help
>
> Hi guys. I have a 2-node replicated gluster setup with the quorum count set
> at 1 brick. By my understanding this means that the volume will not go down
> when one brick is disconnected. This however proves false: when one brick
> is disconnected (I just pulled it off the network) the remaining brick goes
> down as well and I lose my mount points on the server.
> Can anyone shed some light on what's wrong?
>
> My gfs config options are as follows:
>
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> cluster.quorum-count: 1
> auth.allow: 172.*
> cluster.quorum-type: fixed
> performance.cache-size: 1914589184
> performance.cache-refresh-timeout: 60
> cluster.data-self-heal-algorithm: diff
> performance.write-behind-window-size: 4MB
> nfs.trusted-write: off
> nfs.addr-namelookup: off
> cluster.server-quorum-type: server
> performance.cache-max-file-size: 2MB
> network.frame-timeout: 90
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.server-quorum-ratio: 50%
>
> Thank You Kindly,
> Kaamesh

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users