----- Original Message -----
> From: "Dmitry Melekhov" <dm@xxxxxxxxxx>
> To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
> Sent: Tuesday, July 12, 2016 9:27:17 PM
> Subject: Re: 3.7.13, index healing broken?
>
> On 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
>
> > Wow, what are the steps to recreate the problem?
>
> just set file length to zero, always reproducible.

If you are setting the file length to 0 directly on one of the bricks (which looks like the case here), it is not a bug. Index heal relies on failures seen from the mount point(s) to identify the files that need heal, so it cannot recognize modifications made directly on the bricks. The same applies to the heal info command, which is why heal info also shows 0 entries. Heal full, on the other hand, individually compares certain aspects of all files/directories to identify files to be healed. That is why heal full works in this case but index heal does not.

> On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov < dm@xxxxxxxxxx > wrote:
>
> On 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
>
> > What was "gluster volume heal <volname> info" showing when you saw this issue?
>
> just reproduced:
>
> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>
> [root@father brick]# gluster volume heal pool
> Launching heal operation to perform index self heal on volume pool has been successful
> Use heal info commands to check status
> [root@father brick]# gluster volume heal pool info
> Brick father:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> Brick son:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> Brick spirit:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> [root@father brick]#
>
> On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov < dm@xxxxxxxxxx > wrote:
>
> Hello!
>
> 3.7.13, 3 bricks volume.
>
> inside one of the bricks:
>
> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
> -rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
> [root@father brick]#
>
> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
> -rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
> [root@father brick]#
>
> so now the file has 0 length.
>
> try to heal:
>
> [root@father brick]# gluster volume heal pool
> Launching heal operation to perform index self heal on volume pool has been successful
> Use heal info commands to check status
> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
> -rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
> [root@father brick]#
>
> nothing!
>
> [root@father brick]# gluster volume heal pool full
> Launching heal operation to perform full self heal on volume pool has been successful
> Use heal info commands to check status
> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
> -rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
> [root@father brick]#
>
> full heal is OK.
>
> But self-heal does index heal according to
> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
>
> Is this a bug?
>
> As far as I remember it worked in 3.7.10...
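To make the distinction concrete, here is a rough sketch from the command line. Treat it as an illustration only: the brick path /wall/pool/brick and the volume name come from the output above, but the FUSE mount point /mnt/pool is an assumption, and whether a plain lookup from the mount flags a brick-side truncation depends on the state of the changelog xattrs.

# 1) Index heal only walks the per-brick index maintained by AFR on the brick
#    itself; damage done behind Gluster's back is never recorded there, which
#    matches the "Number of entries: 0" output above.
ls /wall/pool/brick/.glusterfs/indices/xattrop/

# 2) Only I/O that goes through a client mount is tracked, so access the file
#    from an (assumed) FUSE mount and see whether anything gets queued.
mkdir -p /mnt/pool
mount -t glusterfs father:/pool /mnt/pool
stat /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm
gluster volume heal pool info

# 3) Full heal crawls and compares the replicas themselves, so it repairs the
#    file even though nothing was ever recorded in the index.
gluster volume heal pool full

In short, index heal and heal info can only report what AFR itself has recorded, which is why edits made directly on a brick never show up there.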
--
Thanks,
Anuradha.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users