Hi all,
I am currently upgrading my oVirt cluster, and after doing the upgrade on one node I end up with unsynced entries that show up in heal info and are not being healed.
My setup is 2+1 (two data bricks plus an arbiter) with 4 volumes.
Here is the volume info for one of the volumes:
Volume Name: data
Type: Replicate
Volume ID: 71c999a4-b769-471f-8169-a1a66b28f9b0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovhost1:/gluster_bricks/data/data
Brick2: ovhost2:/gluster_bricks/data/data
Brick3: ovhost3:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
server.allow-insecure: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
features.shard-block-size: 64MB
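For reference, the above is the output of:

gluster> v info data

(i.e. "gluster volume info data" from the shell).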
Also the output of v heal data info:
gluster> v heal data info
Brick ovhost1:/gluster_bricks/data/data
/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 2
Brick ovhost2:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ovhost3:/gluster_bricks/data/data
/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 2
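If it would help, I can also post the AFR xattrs for those two files from each brick, e.g. something like this run directly on a brick (paths taken from the heal info above):

getfattr -d -m . -e hex /gluster_bricks/data/data/__DIRECT_IO_TEST__
getfattr -d -m . -e hex /gluster_bricks/data/data/4e59777c-5b7b-4bf1-8463-1c818067955e/dom_md/ids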
It does not seem to be a split-brain either:
gluster> v heal data info split-brain
Brick ovhost1:/gluster_bricks/data/data
Status: Connected
Number of entries in split-brain: 0
Brick ovhost2:/gluster_bricks/data/data
Status: Connected
Number of entries in split-brain: 0
Brick ovhost3:/gluster_bricks/data/data
Status: Connected
Number of entries in split-brain: 0
Not sure how to resolve this issue.
The Gluster version is 3.2.15.
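Would triggering a full heal be the right next step here, i.e.:

gluster> v heal data full

("gluster volume heal data full" from the shell), or is something else needed?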
Regards
Carl