On 07/21/2017 11:41 PM, yayo (j) wrote:
Hi,
Sorry for following up again, but checking the oVirt
interface I've found that oVirt reports the "engine" volume as
an "arbiter" configuration and the "data" volume as a fully
replicated volume. Check these screenshots:
This is probably some refresh bug in the UI, Sahina might be able to
tell you.
But the "gluster volume info" command report that all 2
volume are full replicated:
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Number of Bricks: 1 x 3 = 3
Brick1: gdnode01:/gluster/data/brick
Brick2: gdnode02:/gluster/data/brick
Brick3: gdnode04:/gluster/data/brick
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard-block-size: 512MB
performance.strict-o-direct: on
cluster.granular-entry-heal: on
server.allow-insecure: on
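If you want to cross-check from the CLI whether a volume really has an
arbiter brick, the "Number of Bricks" line is the usual giveaway: a plain
replica 3 volume shows "1 x 3 = 3", while an arbiter volume shows
"1 x (2 + 1) = 3", and on recent releases the arbiter brick is also tagged
"(arbiter)" next to its path. Something along these lines should confirm it
(volume names taken from your description):

# gluster volume info engine | grep -E 'Type|Number of Bricks|arbiter'
# gluster volume info data | grep -E 'Type|Number of Bricks|arbiter'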
I don't think the extra entries should be a problem. Did you check
the fuse mount logs for the disconnect messages I referred to in
the other email?
Not sure about this. See if there are disconnect messages in the
mount logs first.
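For reference, a quick way to scan for those is something like the
following, assuming the fuse mount logs are in the default location under
/var/log/glusterfs/ (the exact file name depends on your mount point, so
adjust the glob as needed):

# grep -i disconnect /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log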
-Ravi
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users