Re: File/Directory not healing

I've touched the directory one level above the one with the I/O issue, since that parent is the directory showing as dirty.
It hasn't healed. Should the self-heal daemon automatically kick in here?

Is there anything else I can do?
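
For reference, my understanding is that a heal can be triggered manually and checked with the standard commands (volume name below is a placeholder):

gluster volume heal VOLNAME
gluster volume heal VOLNAME info

and, if the index heal doesn't pick it up, forced with:

gluster volume heal VOLNAME full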

Thanks
David

On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
You can always mount it locally on any of the gluster nodes.
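Something along these lines, for example (mount point and volume name are placeholders):

mount -t glusterfs localhost:/VOLNAME /mnt/glusterfs

and then touch the affected path through that mount.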

Best Regards,
Strahil Nikolov 

On Mon, Feb 13, 2023 at 18:13, David Dolan wrote:
Hi Strahil,

Thanks for that. It's the first time I've been in this position, so I'm learning as I go along.

Unfortunately I can't go into the directory on the client side, as I get an input/output error:
Input/output error
d????????? ? ?      ?        ?            ? 01
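
If I understand it correctly, the question marks just mean the client's stat() of the entry failed; listing the same path directly on a brick, e.g. (brick path is a placeholder):

ls -l /path/to/brick/path/to/01

should still show the real permissions and ownership there.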

Thanks
David


On Sun, 12 Feb 2023 at 20:29, Strahil Nikolov <hunter86_bg@xxxxxxxxx> wrote:
Setting blame on client-1 and client-2 will make a bigger mess.
Can't you touch the affected file from the FUSE mount point?
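For example, something like (FUSE mount path is a placeholder):

touch /mnt/glusterfs/path/to/affected/file

The lookup from the client should trigger the heal.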

Best Regards,
Strahil Nikolov 

On Tue, Feb 7, 2023 at 14:42, David Dolan wrote:
Hi All. 

Hoping you can help me with a healing problem. I have one file which didn't self-heal.
It looks to be a problem with a directory in the path, as one node says it's dirty. I have a replica volume with an arbiter.
This is what the three nodes say, one brick on each:
Node1
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x000000000000000000000001
trusted.afr.dirty=0x000000000000000000000000

Node2
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x000000000000000000000001
trusted.afr.dirty=0x000000000000000000000000

Node3(Arbiter)
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.dirty=0x000000000000000000000001
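
If I'm reading the AFR changelog format correctly, each of these 12-byte values is three 4-byte counters, data | metadata | entry, so for example:

trusted.afr.dirty = 0x 00000000 00000000 00000001
                       data=0   metadata=0 entry=1

i.e. the arbiter thinks one entry operation on this directory is still pending.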
Since Node3 (the arbiter) sees it as dirty, and it looks like Node1 and Node2 have good copies, I was thinking of running the following on Node1, which I believe would tell Node2 and Node3 to sync from Node1.
I'd then kick off a heal on the volume:
setfattr -n trusted.afr.volume-client-1 -v 0x000000010000000000000000 /path/to/dir
setfattr -n trusted.afr.volume-client-2 -v 0x000000010000000000000000 /path/to/dir
client-0 is Node1, client-1 is Node2, and client-2 is Node3. I've verified that the gfid hard links are present in the xattrop directory.
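(For clarity, the index directory I mean is on each brick; brick path below is a placeholder:

ls /path/to/brick/.glusterfs/indices/xattrop/

and the entries there are named by gfid.)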
Is this the correct way to heal and resolve the issue? 

Thanks
David
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
