Hi folks,
my gluster volume isn't fully healing. We had an outage a couple of days ago,
and all other files healed successfully. Now, days later, I can see that two
GFIDs per node still remain in the heal info list.
root@storage-001~# for i in `gluster volume list`; do gluster volume heal $i info; done
Brick storage-003.mydomain.com:/mnt/bricks/g-volume-myvolume
<gfid:612ebae7-3df2-467f-aa02-47d9e3bafc1a>
<gfid:876597cd-702a-49ec-a9ed-46d21f90f754>
Status: Connected
Number of entries: 2
Brick storage-002.mydomain.com:/mnt/bricks/g-volume-myvolume
<gfid:a4babc5a-bd5a-4429-b65e-758651d5727c>
<gfid:48791313-e5e7-44df-bf99-3ebc8d4cf5d5>
Status: Connected
Number of entries: 2
Brick storage-001.mydomain.com:/mnt/bricks/g-volume-myvolume
<gfid:a4babc5a-bd5a-4429-b65e-758651d5727c>
<gfid:48791313-e5e7-44df-bf99-3ebc8d4cf5d5>
Status: Connected
Number of entries: 2
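For reference, I can map the listed GFIDs back to on-brick paths via the standard .glusterfs layout (the first two hex-character pairs of the GFID are the two directory levels). A minimal sketch, using one of the stuck GFIDs and the brick path from the output above:

```shell
#!/bin/sh
# A GFID lives under <brick>/.glusterfs/<chars 1-2>/<chars 3-4>/<full-gfid>.
# For a directory that handle is a symlink to ../../<parent>/<name>;
# for a regular file it is a hard link, findable with `find -samefile`.
BRICK=/mnt/bricks/g-volume-myvolume          # brick path from `heal info`
GFID=48791313-e5e7-44df-bf99-3ebc8d4cf5d5    # one of the stuck entries
SUBPATH=".glusterfs/$(echo "$GFID" | cut -c1-2)/$(echo "$GFID" | cut -c3-4)/$GFID"
echo "$BRICK/$SUBPATH"
# Then, on the brick host, one of:
#   readlink "$BRICK/$SUBPATH"                                            # directory
#   find "$BRICK" -samefile "$BRICK/$SUBPATH" -not -path '*/.glusterfs/*' # file
```

(The readlink/find steps obviously only work on the brick hosts themselves; the path computation above is just the lookup key.)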
In the log I can see that the glustershd process is invoked to heal the
remaining files, but it fails with "remote operation failed":
[2022-09-14 10:56:50.007978 +0000] I [MSGID: 108026]
[afr-self-heal-entry.c:1053:afr_selfheal_entry_do]
0-g-volume-myvolume-replicate-0: performing entry selfheal on
48791313-e5e7-44df-bf99-3ebc8d4cf5d5
[2022-09-14 10:56:50.008428 +0000] I [MSGID: 108026]
[afr-self-heal-entry.c:1053:afr_selfheal_entry_do]
0-g-volume-myvolume-replicate-0: performing entry selfheal on
a4babc5a-bd5a-4429-b65e-758651d5727c
[2022-09-14 10:56:50.015005 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:50.015007 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:50.015138 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:50.614082 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:50.614108 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:50.614099 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:51.619623 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:51.619630 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
[2022-09-14 10:56:51.619632 +0000] E [MSGID: 114031]
[client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
{errno=22}, {error=Invalid argument}]
Gluster is running with op-version 90000 on CentOS. There are no entries in
split-brain.
How can I finally get these files healed?
Thanks in advance.
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users