Hi,

We have a 4-node GlusterFS setup that seems to be running without any problems; we can’t find any issues with replication. We also have 4 machines running the GlusterFS client. On all 4 client machines we see the following entries in the logs at random moments:

[2017-02-23 00:04:33.168778] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0: metadata self heal is successfully completed, metadata self heal from source aab-client-0 to aab-client-1, aab-client-2, aab-client-3, metadata - Pending matrix: [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
[2017-02-23 00:09:34.431089] E [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0: metadata self heal failed, on /
[2017-02-23 00:14:34.948975] I [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status] 0-aab-replicate-0: metadata self heal is successfully completed, metadata self heal from source aab-client-0 to aab-client-1, aab-client-2, aab-client-3, metadata - Pending matrix: [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /

The content of the GlusterFS filesystem is rather static, with only minor changes to it. The “self heal failed” message is printed at random moments in the client logs, even at moments when nothing has changed in the filesystem. When it is printed, it is never on multiple servers at the same time.

What we also don’t understand: the error indicates that self heal failed on the root “/”. The root of this GlusterFS mount contains only 2 folders, and no files are ever written at the root level.

Any thoughts?
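In case it helps, this is roughly how we have been checking for pending heals (the volume name aab comes from the logs above; the brick path /data/brick1/aab is only a placeholder for our actual brick directories):

# on any storage node: list entries the self-heal daemon still considers pending
gluster volume heal aab info

# check explicitly for split-brain entries
gluster volume heal aab info split-brain

# on each storage node: inspect the AFR changelog xattrs on the brick root
# (run as root; /data/brick1/aab is a placeholder for the real brick directory)
getfattr -d -m . -e hex /data/brick1/aab

As far as we understand it, the trusted.afr.aab-client-* xattrs on each brick root are the per-replica pending counters that the “Pending matrix” in the client log is derived from, so a non-zero entry there for “/” would at least tell us which brick the clients think still needs a metadata heal.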