Re: GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

On Fri, Oct 29, 2021 at 12:28 PM Thorsten Walk <darkiop@xxxxxxxxx> wrote:

After a certain time, the volume always ends up in a state where there are files that cannot be healed (in the example below: <gfid:26c5396c-86ff-408d-9cda-106acd2b0768>).

Currently I have the GlusterFS volume in test mode with only 1-2 VMs running on it, and so far there are no negative effects. Replication and self-heal basically work; it is just that every now and then an entry remains that cannot be healed.
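
These entries show up in the heal info output; a minimal way to check, assuming a volume named gv0 (the name is a placeholder for my setup):

    # List entries pending heal on each brick; stuck entries appear as
    # raw <gfid:...> lines rather than resolved file paths.
    gluster volume heal gv0 info

    # Check separately whether any entry is in split-brain:
    gluster volume heal gv0 info split-brain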

Does anyone have an idea how to prevent or heal this? I have already rebuilt the volume completely, including the partitions and glusterd, to rule out leftover state from the old setup.

If you need more information, please contact me.


The next time this occurs, can you check whether disabling `cluster.eager-lock` helps heal the file? Also share the xattrs output (e.g. `getfattr -d -m . -e hex /brick-path/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768`) from all 3 bricks for the file or its gfid.
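
For reference, that would look something like the following sketch; the volume name gv0 and the brick path /data/brick1 are placeholders for your actual names:

    # Disable eager locking on the volume (set it back to "on" when done):
    gluster volume set gv0 cluster.eager-lock off

    # On each of the three bricks (arbiter included), dump the hex-encoded
    # xattrs of the affected gfid from the .glusterfs backend directory:
    getfattr -d -m . -e hex \
        /data/brick1/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768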

Regards,
Ravi

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
