On Wed, Jul 13, 2016 at 10:10 AM, Dmitry Melekhov <dm@xxxxxxxxxx> wrote:
13.07.2016 08:36, Pranith Kumar Karampuri wrote:
On Wed, Jul 13, 2016 at 9:35 AM, Dmitry Melekhov <dm@xxxxxxxxxx> wrote:
13.07.2016 01:52, Anuradha Talur wrote:
----- Original Message -----
From: "Dmitry Melekhov" <dm@xxxxxxxxxx>If you are setting the file length to 0 on one of the bricks (looks like
To: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Tuesday, July 12, 2016 9:27:17 PM
Subject: Re: 3.7.13, index healing broken?
12.07.2016 17:39, Pranith Kumar Karampuri wrote:
Wow, what are the steps to recreate the problem?
just set file length to zero, always reproducible.
If you are setting the file length to 0 on one of the bricks (looks like that is the case), it is not a bug.
Index heal relies on failures seen from the mount point(s) to identify the files that need heal. It won't be able to recognize any file
modification done directly on the bricks. The same goes for the heal info command, which
is why heal info also shows 0 entries.
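(A quick way to see this on disk, as a minimal sketch: it assumes the standard AFR index location under each brick and a hypothetical FUSE mount at /mnt/pool, neither of which comes from this thread. Index heal only walks the per-brick index directory, and a truncate done directly on the brick adds nothing there for it to pick up.)

# Entries queued for index heal live in the brick's index directory;
# no new gfid entry appears when the file is truncated directly on the brick.
ls /wall/pool/brick/.glusterfs/indices/xattrop

# By contrast, a write that goes through a client mount (hypothetical
# path /mnt/pool) and fails on at least one brick leaves a gfid entry
# here, and only such entries show up in "gluster volume heal pool info".
echo test >> /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm
ls /wall/pool/brick/.glusterfs/indices/xattrop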
Well, this makes self-heal useless then: if a file is accidentally corrupted or deleted (yes, a file deleted directly from a brick is not recognized by index heal either), it will not be self-healed, because self-heal uses index heal.
It is better to look into the bit-rot feature if you want to guard against these kinds of problems.
Bit-rot detection catches bit errors, not missing files or wrong file lengths, i.e. it is overhead for such a simple task.
It does detect a wrong length, because the checksum won't match anymore.
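For reference, a rough sketch of turning it on (hedged: these are the 3.7 bitrot CLI subcommands as I recall them, and the scrubber has to run before a bad checksum is reported):

# Enable bit-rot detection; this starts the signer and scrubber for the volume
gluster volume bitrot pool enable
# Make the scrubber walk the bricks more often than the default
gluster volume bitrot pool scrub-frequency daily
# Check what the scrubber has found so far
gluster volume bitrot pool scrub status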
What use case are you trying out that leads to changing things directly on the brick?
Thank you!
OK, thank you for the explanation, but, once again, how about self-healing and data consistency?
Heal full, on the other hand, will individually compare certain aspects of all
files/dirs to identify files to be healed. This is why heal full works in this case
but index heal doesn't.
And if I access this deleted or broken file from a client, then it will be healed? I guess this is what self-heal needs to do.
Thank you!
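One way to test that guess, as a sketch only (/mnt/pool is a hypothetical FUSE mount of the volume, not a path from this thread):

# Look the file up and read it through the client mount, so AFR compares
# all three copies; a mismatch noticed here is what can trigger a
# client-side heal.
stat /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm
md5sum /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm

# Then check whether the truncated copy on the brick was repaired
ls -l /wall/pool/brick/gstatus-0.64-3.el7.x86_64.rpm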
On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov < dm@xxxxxxxxxx > wrote:
12.07.2016 13:33, Pranith Kumar Karampuri wrote:
What was "gluster volume heal <volname> info" showing when you saw this
issue?
just reproduced :
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on volume pool has been
successful
Use heal info commands to check status
[root@father brick]# gluster volume heal pool info
Brick father:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick son:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick spirit:/wall/pool/brick
Status: Connected
Number of entries: 0
[root@father brick]#
On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov < dm@xxxxxxxxxx > wrote:
Hello!
3.7.13, 3-brick volume.
Inside one of the bricks:
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
So now the file has 0 length.
Try to heal:
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on volume pool has been
successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
nothing!
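(For what it's worth, a sketch of how to confirm there is nothing for index heal to act on here, assuming the usual AFR changelog xattr names for this volume:)

# Index heal and "heal info" are driven by the AFR changelog xattrs and
# the index directory; a direct truncate on the brick touches neither.
getfattr -d -m . -e hex /wall/pool/brick/gstatus-0.64-3.el7.x86_64.rpm
# Expect xattrs like trusted.afr.pool-client-0/1/2 to be all zeroes (or absent)
ls /wall/pool/brick/.glusterfs/indices/xattrop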
[root@father brick]# gluster volume heal pool full
Launching heal operation to perform full self heal on volume pool has been
successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
Full heal is OK.
But self-heal does index heal, according to
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
Is this a bug?
As far as I remember, it worked in 3.7.10....
--
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users