Lalatendu Mohanty <lmohanty@redhat.com> writes:

> I am just curious about what does "gluster v heal <volumeName> info
> split-brain" returns when you see this issue?

"Number of entries: 0" every time.

Here's a test I did today, across the same four virtual machines with
replica 2, and a fifth virtual machine as a native glusterfs client:

* "shutdown -h now" on server node 02
* On the client: # rm -Rf linux-3.12
* Wait 30 seconds, then boot server node 02 back up
* Wait until this appears on the client:
  rm: cannot remove `linux-3.12/arch/powerpc/platforms/52xx': Directory not empty
* "heal info split-brain" and "heal info" on node 01, in full:

-----
root@ovvm01:~# gluster v heal ovvmvol0 info split-brain
Gathering Heal info on volume ovvmvol0 has been successful

Brick ovvm01.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0

Brick ovvm02.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0

Brick ovvm03.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0

Brick ovvm04.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0

root@ovvm01:~# gluster v heal ovvmvol0 info
Gathering Heal info on volume ovvmvol0 has been successful

Brick ovvm01.itea.ntnu.no:/export/sdb1/brick
Number of entries: 4
/linux-3.12
/linux-3.12/arch
/linux-3.12/arch/powerpc
/linux-3.12/arch/powerpc/platforms

Brick ovvm02.itea.ntnu.no:/export/sdb1/brick
Number of entries: 1
<gfid:6ec5ceae-14fa-4d02-8f1e-d3c362860557>/52xx/Kconfig

Brick ovvm03.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0

Brick ovvm04.itea.ntnu.no:/export/sdb1/brick
Number of entries: 0
-----

* Verify on the client, after rm has finished:

# find linux-3.12
linux-3.12
linux-3.12/arch
linux-3.12/arch/powerpc
linux-3.12/arch/powerpc/platforms
linux-3.12/arch/powerpc/platforms/52xx
linux-3.12/arch/powerpc/platforms/52xx/Kconfig

Øystein
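
For reference, a minimal shell sketch of the reproduction steps above. It only
restates the procedure from the post; the mount point /mnt/gluster, passwordless
root ssh to the nodes, and powering node 02 back on by hand (e.g. from the
hypervisor) are assumptions, not something the post states. Hostnames and the
volume name are taken from the output above.

-----
#!/bin/bash
# Sketch of the reproduction steps described in the post.
VOL=ovvmvol0
NODE1=ovvm01.itea.ntnu.no
NODE2=ovvm02.itea.ntnu.no
MNT=/mnt/gluster                      # assumed client-side mount point

ssh root@"$NODE2" 'shutdown -h now'   # take server node 02 down
rm -Rf "$MNT"/linux-3.12 &            # start the recursive delete on the client
sleep 30
# Boot server node 02 again here (manually or via the hypervisor), then wait
# until it answers ssh before asking for heal info.
until ssh -o ConnectTimeout=5 root@"$NODE2" true 2>/dev/null; do sleep 5; done

ssh root@"$NODE1" "gluster volume heal $VOL info split-brain"
ssh root@"$NODE1" "gluster volume heal $VOL info"

wait                                  # let the background rm finish
find "$MNT"/linux-3.12                # list whatever was left behind
-----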