Hi Strahil,
Thanks for your reply. To better explain my setup: while I use the same nodes for both oVirt and Gluster, I manage the two independently (so Gluster is not managed by oVirt).
See below for the output you requested:
gluster pool list
UUID                                  Hostname             State
a2a62dd6-49b2-4eb6-a7e2-59c75723f5c7  ovirt-node3-storage  Connected
83f24b13-eaad-4443-9dc3-0152b74385f4  ovirt-node2-storage  Connected
acb80b35-d6ac-4085-87cd-ba69ff3f81e6  localhost            Connected
For simplicity, I will only send the `gluster volume info` output for one of the affected volumes:
Volume Name: VM_Storage_1
Type: Distributed-Replicate
Volume ID: 1a4e23db-1c98-4d89-b888-b4ae2e0ad5fc
Status: Started
Snapshot Count: 0
Number of Bricks: 9 x (2 + 1) = 27
Transport-type: tcp
Bricks:
Brick1: lab-cnvirt-h01-storage:/bricks/vm_b1_vol/brick
Brick2: lab-cnvirt-h02-storage:/bricks/vm_b1_vol/brick
Brick3: lab-cnvirt-h03-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick4: lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick
Brick5: lab-cnvirt-h01-storage:/bricks/vm_b2_vol/brick
Brick6: lab-cnvirt-h02-storage:/bricks/vm_b2_arb/brick (arbiter)
Brick7: lab-cnvirt-h02-storage:/bricks/vm_b2_vol/brick
Brick8: lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick
Brick9: lab-cnvirt-h01-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick10: lab-cnvirt-h01-storage:/bricks/vm_b3_vol/brick
Brick11: lab-cnvirt-h02-storage:/bricks/vm_b3_vol/brick
Brick12: lab-cnvirt-h03-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick13: lab-cnvirt-h03-storage:/bricks/vm_b3_vol/brick
Brick14: lab-cnvirt-h01-storage:/bricks/vm_b4_vol/brick
Brick15: lab-cnvirt-h02-storage:/bricks/vm_b4_arb/brick (arbiter)
Brick16: lab-cnvirt-h02-storage:/bricks/vm_b4_vol/brick
Brick17: lab-cnvirt-h03-storage:/bricks/vm_b4_vol/brick
Brick18: lab-cnvirt-h01-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick19: lab-cnvirt-h01-storage:/bricks/vm_b5_vol/brick
Brick20: lab-cnvirt-h02-storage:/bricks/vm_b5_vol/brick
Brick21: lab-cnvirt-h03-storage:/bricks/vm_b5_arb/brick (arbiter)
Brick22: lab-cnvirt-h03-storage:/bricks/vm_b5_vol/brick
Brick23: lab-cnvirt-h01-storage:/bricks/vm_b6_vol/brick
Brick24: lab-cnvirt-h02-storage:/bricks/vm_b6_arb/brick (arbiter)
Brick25: lab-cnvirt-h02-storage:/bricks/vm_b6_vol/brick
Brick26: lab-cnvirt-h03-storage:/bricks/vm_b6_vol/brick
Brick27: lab-cnvirt-h01-storage:/bricks/vm_b5_arb/brick (arbiter)
Options Reconfigured:
cluster.self-heal-daemon: enable
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.read-hash-mode: 3
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
nfs.disable: on
transport.address-family: inet
I have also tried setting cluster.server-quorum-type: none, but it made no difference.
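For reference, this is roughly how I changed and then reverted the option — a sketch using the standard gluster CLI, with the VM_Storage_1 volume name taken from the info output above:

```shell
# Disable server-side quorum on the affected volume (VM_Storage_1 from above)
gluster volume set VM_Storage_1 cluster.server-quorum-type none

# Check the value currently in effect
gluster volume get VM_Storage_1 cluster.server-quorum-type

# Revert to the value shown in "Options Reconfigured"
gluster volume set VM_Storage_1 cluster.server-quorum-type server
```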
Thanks
Marco
On Wed, 19 May 2021 at 07:48, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
I think that we also have to take a look at the quorum settings. Usually oVirt adds hosts as part of the TSP even if they have no bricks involved in the volume. Can you provide the output of:
'gluster pool list'
'gluster volume info all'
Best Regards,
Strahil Nikolov
On Wed, May 19, 2021 at 8:31, Ravishankar N <ravishankar@xxxxxxxxxx> wrote:
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users