Hello,
I just upgraded from GlusterFS 3.7.17 to 3.8.6 and, I suppose by rebooting without shutting things down properly, somehow got my volume into a bad state. Now I have some files which need to be healed: 17 entries on node 1 to be precise, and 0 on node 2. The volume is a replica 2, by the way.
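For reference, this is how I am checking the pending entries (the volume is simply called "myvolume" here):
# gluster volume heal myvolume info
Under the brick on node 1 it lists the 17 entries and reports "Number of entries: 17", while the brick on node 2 reports "Number of entries: 0".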
So, in order to correct that, I ran the following command:
# gluster volume heal myvolume
Launching heal operation to perform index self heal on volume myvolume has been successful
Use heal info commands to check status
But the 17 files are still listed and nothing seems to have happened.
Strangely enough, I see the following suspicious warning in the GlusterFS self-heal daemon log file (glustershd.log) on both nodes:
[2016-12-11 13:03:13.262962] W [MSGID: 101088] [common-utils.c:3894:gf_backtrace_save] 0-myvolume-replicate-0: Failed to save the backtrace.
Any ideas what could be going wrong here?
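Should I try a full heal next, i.e. something like:
# gluster volume heal myvolume full
or is there something else I should check first (split-brain perhaps, via "gluster volume heal myvolume info split-brain")?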
Regards,
M.