Bugs should be filed at https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

On 11/11/2013 11:24 PM, Øystein Viggen wrote:
> Lalatendu Mohanty <lmohanty at redhat.com> writes:
>
>> It sounds like a split-brain issue. The commands below will help
>> you figure this out:
>>
>> gluster v heal <volumeName> info split-brain
>> gluster v heal <volumeName> info heal-failed
>>
>> If you see any split-brain, then it is a bug. We can check with
>> gluster-devel whether it is fixed in the master branch or whether
>> there is already a bug for it in bugzilla.
> Thank you for your reply.
>
> I've repeated a similar test on my two-node cluster like this:
>
> 1. shut down node 02
> 2. on the client, run "rm -Rf linux-3.12/"
> 3. while the rm is running, boot up node 02
>
> That has the following interesting results:
>
> On the client:
>
> rm: cannot remove `linux-3.12/arch/mips/netlogic/dts': Directory not
> empty
>
> On the servers:
>
> "gluster v heal ovvmvol0 info split-brain" and "gluster v heal ovvmvol0
> info heal-failed" both show 0 entries.
>
> It also claims to have healed some files:
> -----
> # gluster v heal ovvmvol0 info healed
> Gathering Heal info on volume ovvmvol0 has been successful
>
> Brick ovvm01.itea.ntnu.no:/export/sdb1/brick
> Number of entries: 4
> at                    path on brick
> -----------------------------------
> 2013-11-11 13:49:32 /linux-3.12/arch/mips/netlogic
> 2013-11-11 13:49:32 /linux-3.12/arch/mips
> 2013-11-11 13:49:30 /linux-3.12/arch
> 2013-11-11 13:49:29 /linux-3.12
>
> Brick ovvm02.itea.ntnu.no:/export/sdb1/brick
> Number of entries: 3
> at                    path on brick
> -----------------------------------
> 2013-11-11 13:49:29 <gfid:2febb3e3-b72f-47f0-a6f6-cbec70d8874c>/dts/xlp_fvp.dts
> 2013-11-11 13:49:29 <gfid:2febb3e3-b72f-47f0-a6f6-cbec70d8874c>/dts/xlp_evp.dts
> 2013-11-11 13:49:29 <gfid:2febb3e3-b72f-47f0-a6f6-cbec70d8874c>/dts/Makefile
> -----
>
> On the client, these three files in linux-3.12/arch/mips/netlogic/dts/
> are indeed shown as present.
>
>
> Still curious whether this was somehow a quorum issue, I added two more
> servers, for a total of four servers with one brick each, still
> replica 2. I set cluster.server-quorum-type=server and
> cluster.server-quorum-ratio=51%.
>
> I repeated the experiment of shutting down node 02, starting an "rm -Rf"
> on a client, and booting up node 02 again. This time, it healed
> seemingly half of the linux-3.12/arch/x86/include/asm/ directory. As one
> might expect, the directory is completely empty on bricks 03 and 04,
> while bricks 01 and 02 share the same files.
>
>
> Should I file a bug about this somewhere? It seems easy enough to
> reproduce.
>
> Øystein
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
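
For reference, the long-form equivalents of the heal-status commands used in the thread, as a minimal sketch only; it assumes the volume name ovvmvol0 from the report and is run on a server in the trusted pool:

    # List entries currently detected as split-brain (volume name is an example)
    gluster volume heal ovvmvol0 info split-brain

    # List entries the self-heal daemon failed to heal
    gluster volume heal ovvmvol0 info heal-failed

    # List entries reported as healed, and entries still pending heal
    gluster volume heal ovvmvol0 info healed
    gluster volume heal ovvmvol0 info

    # Trigger a self-heal of pending entries
    gluster volume heal ovvmvol0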
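
The server-quorum settings mentioned in the test would typically be applied as below; this is a sketch under the same volume-name assumption, and cluster.server-quorum-ratio is a cluster-wide option, hence the "all" keyword:

    # Enable server-side quorum enforcement for the volume (volume name is an example)
    gluster volume set ovvmvol0 cluster.server-quorum-type server

    # Set the cluster-wide server quorum ratio; 51% requires a strict majority of servers
    gluster volume set all cluster.server-quorum-ratio 51%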