gluster quorum settings

Hello,

we just created a 3-node gluster volume (replica 3 arbiter 1), and "systemctl status glusterd" shows:

n1.test.net etc-glusterfs-glusterd.vol[1458]: [2017-02-03 17:56:24.691334] C [MSGID: 106003] [glusterd-server-quorum.c:341:glusterd_do_volume_quorum_action] 0-management: Server quorum regained for volume TESTp1. Starting local bricks.

How can we set up the gluster quorum parameters to eliminate this warning, avoid split brain, and keep the volume writable if any single node goes down?

current settings:

server.event-threads: 8
client.event-threads: 8
performance.io-thread-count: 20
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto             # we are not sure this fully protects against split brain (e.g. when updating nodes)
cluster.server-quorum-type: server    # this looks OK
features.shard: on
cluster.data-self-heal-algorithm: diff
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
network.ping-timeout: 10
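For reference, a sketch of how the quorum-related options above would typically be applied with the gluster CLI. The volume name TESTp1 is taken from the log message above and may differ in your setup; the ratio value is illustrative, not a recommendation.

```shell
# Client-side quorum: with replica 3, "auto" requires a majority of
# bricks (2 of 3) to be reachable before writes are allowed.
gluster volume set TESTp1 cluster.quorum-type auto

# Server-side quorum: glusterd stops the local bricks when fewer than
# the quorum ratio of peers in the trusted pool are reachable.
gluster volume set TESTp1 cluster.server-quorum-type server

# The server quorum ratio is a cluster-wide option (applied with
# "all", default is a simple majority); 51% shown as an example.
gluster volume set all cluster.server-quorum-ratio 51%
```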

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

regards
Bap.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
