Hi guys. I have a two-node replicated Gluster setup with cluster.quorum-count set to 1. My understanding is that this means the volume should stay up as long as at least one brick is reachable. In practice, however, when one brick is disconnected (I simply pulled it off the network), the remaining brick goes down as well and I lose my mount points on the server.
Can anyone shed some light on what is going wrong?
My volume configuration options are as follows:
Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
cluster.quorum-count: 1
auth.allow: 172.*
cluster.quorum-type: fixed
performance.cache-size: 1914589184
performance.cache-refresh-timeout: 60
cluster.data-self-heal-algorithm: diff
performance.write-behind-window-size: 4MB
nfs.trusted-write: off
nfs.addr-namelookup: off
cluster.server-quorum-type: server
performance.cache-max-file-size: 2MB
network.frame-timeout: 90
network.ping-timeout: 30
performance.quick-read: off
cluster.server-quorum-ratio: 50%
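
For completeness, the quorum-related options above were applied with the usual gluster CLI, roughly along these lines (just a sketch; I believe cluster.server-quorum-ratio is a cluster-wide option, hence the "all" target):

# client-side (replica) quorum, as shown in the volume info above
gluster volume set gfsvolume cluster.quorum-type fixed
gluster volume set gfsvolume cluster.quorum-count 1

# server-side quorum, enforced by glusterd across the trusted pool
gluster volume set gfsvolume cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 50%

# commands I use to check brick and peer state after pulling one node
gluster volume status gfsvolume
gluster peer status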
Thank You Kindly,
Kaamesh