Hi,
I am having significant issues with glustershd on releases 8.4 and 9.1.
My oVirt clusters use Gluster storage backends and were running fine with Gluster 7.x (shipped with earlier versions of oVirt Node 4.4.x). The oVirt project recently moved the nodes to Gluster 8.4, so I moved to that release when upgrading my clusters.
Since then I have been having issues whenever one of the nodes is brought down: when the node comes back online the bricks typically come back up and work, but some (seemingly random) glustershd processes on the various nodes fail to connect to some of the bricks.
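(For completeness, a check along these lines confirms that all bricks are reported online after the reboot, which matches the 27/27 "Bricks Up" shown below, even though the pending heals never clear; volume name as above:)

  gluster volume status VM_Storage_1        # brick and Self-heal Daemon processes, ports, PIDs
  gluster volume status VM_Storage_1 shd    # self-heal daemon status only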
Typically when this happens the files are not getting healed:
VM_Storage_1
Distributed_replicate Started (UP) - 27/27 Bricks Up
Capacity: (27.10% used) 2.00 TiB/8.00 TiB (used/total)
Self-Heal:
lab-cnvirt-h01-storage:/bricks/vm_b1_vol/brick (8 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b1_vol/brick (8 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b2_vol/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b2_arb/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b2_vol/brick (5 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b1_arb/brick (5 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b3_vol/brick (9 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b3_vol/brick (9 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b4_vol/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b4_arb/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b4_vol/brick (10 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b3_arb/brick (10 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b5_vol/brick (3 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b5_vol/brick (3 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b6_vol/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b6_arb/brick (4 File(s) to heal).
lab-cnvirt-h02-storage:/bricks/vm_b6_vol/brick (9 File(s) to heal).
lab-cnvirt-h01-storage:/bricks/vm_b5_arb/brick (9 File(s) to heal).
(They never heal on their own; the number of files pending heal does change over time, however.)
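(For reference, the same pending-heal counts can be cross-checked directly from the CLI with something like the following; volume name as above:)

  gluster volume heal VM_Storage_1 info summary   # per-brick count of entries pending heal
  gluster volume heal VM_Storage_1 info           # list the individual entries awaiting heal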
In the glustershd.log files, I can see the following continuously:
[2021-05-17 10:27:30.531561 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49154 (from 0)
[2021-05-17 10:27:30.533709 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49155 (from 0)
[2021-05-17 10:27:30.534211 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-3: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:27:30.534514 +0000] W [MSGID: 114043] [client-handshake.c:727:client_setvolume_cbk] 2-VM_Storage_1-client-3: failed to set the volume [{errno=2}, {error=No such file or directory}]
The message "I [MSGID: 114018] [client.c:2229:client_rpc_notify] 2-VM_Storage_1-client-3: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=VM_Storage_1-client-3}]" repeated 4 times between [2021-05-17 10:27:18.510668 +0000] and [2021-05-17 10:27:30.534569 +0000]
[2021-05-17 10:27:30.536254 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-7: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:27:30.536620 +0000] W [MSGID: 114043] [client-handshake.c:727:client_setvolume_cbk] 2-VM_Storage_1-client-7: failed to set the volume [{errno=2}, {error=No such file or directory}]
[2021-05-17 10:27:30.536638 +0000] W [MSGID: 114007] [client-handshake.c:752:client_setvolume_cbk] 2-VM_Storage_1-client-7: failed to get from reply dict [{process-uuid}, {errno=22}, {error=Invalid argument}]
[2021-05-17 10:27:30.536651 +0000] E [MSGID: 114044] [client-handshake.c:757:client_setvolume_cbk] 2-VM_Storage_1-client-7: SETVOLUME on remote-host failed [{remote-error=Brick not found}, {errno=2}, {error=No such file or directory}]
[2021-05-17 10:27:30.536660 +0000] I [MSGID: 114051] [client-handshake.c:879:client_setvolume_cbk] 2-VM_Storage_1-client-7: sending CHILD_CONNECTING event []
[2021-05-17 10:27:30.536686 +0000] I [MSGID: 114018] [client.c:2229:client_rpc_notify] 2-VM_Storage_1-client-7: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=VM_Storage_1-client-7}]
[2021-05-17 10:27:33.537589 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49154 (from 0)
[2021-05-17 10:27:33.539554 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49155 (from 0)
From my understanding, glustershd is trying to connect to these bricks on the wrong ports: the log shows it targeting 49154/49155, while the two bricks in question (client-3 and client-7 map to the entries below) are actually listening on 49168/49169 according to gluster volume status. A quick way to compare the two is sketched after the output below:
lab-cnvirt-h03-storage:-bricks-vm_b2_vol-brick:8:brick-id=VM_Storage_1-client-7
Brick lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick 49169 0 Y 1600469
lab-cnvirt-h03-storage:-bricks-vm_b1_vol-brick:8:brick-id=VM_Storage_1-client-3
Brick lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick 49168 0 Y 1600460
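(Sketch of the comparison; the log path is the default glustershd log location, and the brick path is just one of the affected bricks from the status output above:)

  grep 'changing port' /var/log/glusterfs/glustershd.log | tail -n 5                  # ports the self-heal daemon keeps trying
  gluster volume status VM_Storage_1 lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick   # port the brick is actually listening on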
Typically, to resolve this I have to manually kill the affected glusterfsd processes (in this case the two processes above) and then issue 'gluster volume start VM_Storage_1 force' to restart them; roughly the sequence sketched below.
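(The PIDs here are just the ones from the status output above and will obviously differ each time:)

  kill 1600469                               # glusterfsd serving /bricks/vm_b2_vol/brick on lab-cnvirt-h03
  kill 1600460                               # glusterfsd serving /bricks/vm_b1_vol/brick on lab-cnvirt-h03
  gluster volume start VM_Storage_1 force    # respawns the killed brick processes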
As soon as I do that, the process is able to reconnect and healing starts:
[2021-05-17 10:46:12.513706 +0000] I [MSGID: 100041] [glusterfsd-mgmt.c:1035:glusterfs_handle_svc_attach] 0-glusterfs: received attach request for volfile [{volfile-id=shd/VM_Storage_1}]
[2021-05-17 10:46:12.513847 +0000] I [MSGID: 100040] [glusterfsd-mgmt.c:109:mgmt_process_volfile] 0-glusterfs: No change in volfile, countinuing []
[2021-05-17 10:46:14.626397 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-7: changing port to 49157 (from 0)
[2021-05-17 10:46:14.628468 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 2-VM_Storage_1-client-3: changing port to 49156 (from 0)
[2021-05-17 10:46:14.628927 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-7: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:46:14.629633 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 2-VM_Storage_1-client-7: Connected, attached to remote volume [{conn-name=VM_Storage_1-client-7}, {remote_subvol=/bricks/vm_b2_vol/brick}]
[2021-05-17 10:46:14.631212 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-VM_Storage_1-client-3: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-05-17 10:46:14.631949 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 2-VM_Storage_1-client-3: Connected, attached to remote volume [{conn-name=VM_Storage_1-client-3}, {remote_subvol=/bricks/vm_b1_vol/brick}]
[2021-05-17 10:46:14.705116 +0000] I [MSGID: 108026] [afr-self-heal-data.c:347:afr_selfheal_data_do] 2-VM_Storage_1-replicate-2: performing data selfheal on 399bdfe5-01b7-46f9-902b-9351420debc9
[2021-05-17 10:46:14.705214 +0000] I [MSGID: 108026] [afr-self-heal-data.c:347:afr_selfheal_data_do] 2-VM_Storage_1-replicate-2: performing data selfheal on 3543a4c7-4a68-4193-928a-c9f7ef08ce4e
Am I doing something wrong here?
I have also tried upgrading my test cluster to 9.1 (the logs above are from 9.1), but I hit exactly the same issue.
Do you need any specific information?
It is happening with all my volumes; the info for the volume discussed above (VM_Storage_1) is below:
Volume Name: VM_Storage_1
Type: Distributed-Replicate
Volume ID: 1a4e23db-1c98-4d89-b888-b4ae2e0ad5fc
Status: Started
Snapshot Count: 0
Number of Bricks: 9 x (2 + 1) = 27
Transport-type: tcp
Bricks:
Brick1: lab-cnvirt-h01-storage:/bricks/vm_b1_vol/brick
Brick2: lab-cnvirt-h02-storage:/bricks/vm_b1_vol/brick
Brick3: lab-cnvirt-h03-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick4: lab-cnvirt-h03-storage:/bricks/vm_b1_vol/brick
Brick5: lab-cnvirt-h01-storage:/bricks/vm_b2_vol/brick
Brick6: lab-cnvirt-h02-storage:/bricks/vm_b2_arb/brick (arbiter)
Brick7: lab-cnvirt-h02-storage:/bricks/vm_b2_vol/brick
Brick8: lab-cnvirt-h03-storage:/bricks/vm_b2_vol/brick
Brick9: lab-cnvirt-h01-storage:/bricks/vm_b1_arb/brick (arbiter)
Brick10: lab-cnvirt-h01-storage:/bricks/vm_b3_vol/brick
Brick11: lab-cnvirt-h02-storage:/bricks/vm_b3_vol/brick
Brick12: lab-cnvirt-h03-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick13: lab-cnvirt-h03-storage:/bricks/vm_b3_vol/brick
Brick14: lab-cnvirt-h01-storage:/bricks/vm_b4_vol/brick
Brick15: lab-cnvirt-h02-storage:/bricks/vm_b4_arb/brick (arbiter)
Brick16: lab-cnvirt-h02-storage:/bricks/vm_b4_vol/brick
Brick17: lab-cnvirt-h03-storage:/bricks/vm_b4_vol/brick
Brick18: lab-cnvirt-h01-storage:/bricks/vm_b3_arb/brick (arbiter)
Brick19: lab-cnvirt-h01-storage:/bricks/vm_b5_vol/brick
Brick20: lab-cnvirt-h02-storage:/bricks/vm_b5_vol/brick
Brick21: lab-cnvirt-h03-storage:/bricks/vm_b5_arb/brick (arbiter)
Brick22: lab-cnvirt-h03-storage:/bricks/vm_b5_vol/brick
Brick23: lab-cnvirt-h01-storage:/bricks/vm_b6_vol/brick
Brick24: lab-cnvirt-h02-storage:/bricks/vm_b6_arb/brick (arbiter)
Brick25: lab-cnvirt-h02-storage:/bricks/vm_b6_vol/brick
Brick26: lab-cnvirt-h03-storage:/bricks/vm_b6_vol/brick
Brick27: lab-cnvirt-h01-storage:/bricks/vm_b5_arb/brick (arbiter)
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.read-hash-mode: 3
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
nfs.disable: on
transport.address-family: inet
cluster.self-heal-daemon: enable
Regards,
Marco