Ahh, thank you, now I get it. I deleted it on one node and the deletion replicated to the other one. Now I get the following output:
[root@gluster1 var]# gluster volume heal gv01 info
Brick gluster1:/home/gluster/gv01/
<gfid:d3def9e1-c6d0-4b7d-a322-b5019305182e>
Number of entries: 1

Brick gluster2:/home/gluster/gv01/
Number of entries: 0

[root@gluster1 var]# gluster volume heal gv01 info
Brick gluster1:/home/gluster/gv01/
<gfid:d3def9e1-c6d0-4b7d-a322-b5019305182e>
Number of entries: 1

Brick gluster2:/home/gluster/gv01/
Number of entries: 0
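For the record, the brick-side deletion I did looked something like this (the file name here is just an example; the .glusterfs path is the gfid from the heal output above):

[root@gluster1 ~]# # remove the file on the brick itself, NOT on the client mount
[root@gluster1 ~]# rm /home/gluster/gv01/testfile
[root@gluster1 ~]# # remove the matching hardlink under .glusterfs as well, otherwise
[root@gluster1 ~]# # the brick still holds a reference to the old copy
[root@gluster1 ~]# rm /home/gluster/gv01/.glusterfs/d3/de/d3def9e1-c6d0-4b7d-a322-b5019305182e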
Is that normal? Why isn't the number of entries reset to 0?
And why wouldn't the file show up in split-brain before, anyway?
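(In case it's useful to anyone else: my understanding is that a bare <gfid:...> entry can be mapped back to a path on the brick, since every file has a hardlink at .glusterfs/<first two hex>/<next two hex>/<gfid>, and a heal pass can be kicked off by hand instead of waiting for the self-heal daemon's next run. A sketch, assuming GNU find:)

[root@gluster1 ~]# # resolve the gfid from the heal output to its real path on the brick
[root@gluster1 ~]# find /home/gluster/gv01 -path '*/.glusterfs/*' -prune -o \
      -samefile /home/gluster/gv01/.glusterfs/d3/de/d3def9e1-c6d0-4b7d-a322-b5019305182e -print
[root@gluster1 ~]# # ask gluster to run an index heal now rather than waiting
[root@gluster1 ~]# gluster volume heal gv01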
On Tue, Sep 9, 2014 at 7:46 AM, Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx> wrote:
On 09/09/2014 01:54 AM, Ilya Ivanov wrote:
> Hello.
>
> I've Gluster 3.5.2 on CentOS 6. A primitive replicated volume, as described here. I tried to simulate split-brain by temporarily disconnecting the nodes and creating a file with the same name and different contents. That worked.
>
> The question is, how do I fix it now? All the tutorials suggest deleting the file from one of the nodes. I can't do that; it reports "Input/output error". The file won't even show up in "gluster volume heal gv00 info split-brain". That shows 0 entries.

The deletion needs to happen on one of the bricks, not from the mount point.

Pranith

> I can see the file in "gluster volume heal gv00 info heal-failed", though.
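(A side note on that last point: as far as I know, split-brain detection is based on the AFR changelog xattrs that each brick keeps for a file, so comparing them directly on both bricks shows whether the copies actually accuse each other. A sketch, with a hypothetical file name:)

[root@gluster1 ~]# # inspect the replication metadata on each brick's copy
[root@gluster1 ~]# getfattr -d -m . -e hex /home/gluster/gv01/testfile
[root@gluster2 ~]# getfattr -d -m . -e hex /home/gluster/gv01/testfile
[root@gluster1 ~]# # non-zero trusted.afr.gv01-client-* pending counters on both
[root@gluster1 ~]# # copies, each blaming the other, is what marks a file split-brain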
--
Ilya.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users