Hello Community,

I'm new to GlusterFS, so if I ask a silly question please just point me to the correct doc page.
I am using oVirt in a lab with Gluster 'replica 3 arbiter 1' volumes which were created through the oVirt interface. Everything is fine, but as my arbiter is far away, the latency is killing my performance. Thankfully Gluster has a nice option called thin-arbiter, but I would prefer not to destroy the current volumes.
Can someone tell me how to replace the arbiter with a thin-arbiter? add-brick (after removal of the current arbiter brick) doesn't seem to have a 'thin-arbiter' option.
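For reference, the syntax I was expecting is the one from the upstream thin-arbiter docs. As far as I understand it only applies to newly created volumes on a much newer release, and the hosts/paths below are just placeholders, not something I have actually run:

gluster volume create <VOLNAME> replica 2 thin-arbiter 1 <host1>:<brick1> <host2>:<brick2> <thin-arbiter-host>:<brick-path>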
Am I using an old version which does not have thin-arbiter?
Here is some output:
# glusterfs --version
glusterfs 3.12.2
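In case it is relevant, I can also share the cluster op-version. I assume this is the right command to check it and that it works on 3.12 as well:

# gluster volume get all cluster.op-version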
# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: daddea78-204d-42b5-9794-11d5518d61e0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/gluster_bricks/engine/engine
Brick2: ovirt2.localdomain:/gluster_bricks/engine/engine
Brick3: glarbiter.localdomain:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
# gluster volume remove-brick engine replica 2 glarbiter.localdomain:/gluster_bricks/engine/engine force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: daddea78-204d-42b5-9794-11d5518d61e0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/gluster_bricks/engine/engine
Brick2: ovirt2.localdomain:/gluster_bricks/engine/engine
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
# gluster volume add-brick engine replica 2 thin-arbiter 1 glarbiter.localdomain:/gluster_bricks/engine/engine force
Wrong brick type: thin-arbiter, use <HOSTNAME>:<export-dir-abs-path>

Usage:
volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force]
# gluster volume add-brick engine replica 3 arbiter 1 glarbiter.localdomain:/gluster_bricks/engine/engine force
volume add-brick: success
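After re-adding the full arbiter I am letting the heals finish before experimenting further - I assume watching heal info is enough for that:

# gluster volume heal engine info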
# gluster volume create

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
Thanks in advance for your help.

Best Regards,
Strahil Nikolov