Hi,

I have been running 4 nodes in a distributed-replicated setup on Gluster 3.3.1 since January. Each node has 10TB of storage, giving 20TB in total; for 2-3 years before that they were running on previous versions of Gluster. Recently we had issues with the backend storage (ext4) on one of the nodes going read-only. That's now resolved, and I have run the following, which reported errors:

    gluster volume heal repository info split-brain

This shows output like the following, around 1041 lines in total:

    2013-06-20 06:34:05 /shareddocuments/Product Media/images/big/w/7355.jpg
    2013-06-20 06:34:05 <gfid:929454d9-d3f2-44cb-a96a-5ffdc9ed538a>
    2013-06-20 06:34:05 <gfid:4de362d1-8b91-4be2-8900-5b727e612912>
    2013-06-20 06:34:05 <gfid:eb57c8a4-8f22-470d-bfa8-f6780d57207a>
    2013-06-20 06:34:05 <gfid:55722221-66c3-4950-88cb-b8eeeb6951b8>
    2013-06-20 06:34:05 <gfid:3597bbd5-29ec-4414-9b6c-c386e1b4334e>
    2013-06-20 06:34:05 <gfid:d85a1dcd-f5bb-4dcc-83cc-73badd44d1d9>
    2013-06-20 06:34:05 <gfid:95ee87d4-18b2-4ad7-8d11-bcc93ad745a7>
    2013-06-20 06:34:05 <gfid:b9fa7988-8194-4520-9660-883e5d8d8fae>
    2013-06-20 06:34:05 <gfid:06fac327-0e4a-4b88-96af-d34b78a7c356>
    2013-06-20 06:34:05 <gfid:e2a4f473-f46a-47aa-a4a1-fd7b748ed862>
    2013-06-20 06:34:05 <gfid:c6e4104e-0b34-4be7-aed9-29fa30d96517>
    2013-06-20 06:34:05 /shareddocuments/Product Media/images/big/a/40096.jpg
    2013-06-20 06:34:05 <gfid:8bc3bce9-4df3-4d36-9a46-0710314dccdc>
    2013-06-20 06:34:05 <gfid:59231e11-6ec0-4e80-9c2b-13ffa0a91e79>
    2013-06-20 06:34:05 <gfid:20763d19-b856-4143-b698-70cf7f4b5ffb>
    2013-06-20 06:34:05 <gfid:73560046-745e-452f-8e9e-842753289e60>
    2013-06-20 06:34:05 <gfid:d3937b80-c577-4105-8fad-d1b81448ef15>
    2013-06-20 06:34:05 <gfid:fca05aec-4ac0-4c3e-be04-79b781f76073>
    2013-06-20 06:34:05 <gfid:d3dac544-7647-4157-afa6-2fac383d8e07>
    2013-06-20 06:34:05 /shareddocuments/Product Media/images/big/b/Thumbs.db
    2013-06-20 06:34:05 <gfid:20e9153a-95ef-4a16-9e97-f2035fe4bed4>
    2013-06-20 06:34:05 <gfid:0bd5b378-564c-47bc-88e4-42faa662c619>
    2013-06-20 06:34:05 <gfid:828b70e1-e830-45d5-9762-8930f7a9532b>
    2013-06-20 06:34:05 <gfid:368f8b86-0896-4c5a-9183-c4a32f9beb19>
    2013-06-20 06:34:05 <gfid:55628d66-c03d-48e4-a060-4236e1265e3b>
    2013-06-20 06:34:05 <gfid:936088bb-4d00-49c8-b47a-36de7b2ebc23>

I know you can do the following on the node holding the bad copy of a file to heal a split-brain problem (remove both the file on the brick and its .glusterfs hard link, then let self-heal copy the good replica back):

    BRICK=/data/testvol/brick1
    SBFILE=/foo/bar
    GFID=$(getfattr -n trusted.gfid --absolute-names -e hex ${BRICK}${SBFILE} | grep 0x | cut -d'x' -f2)
    rm -f ${BRICK}${SBFILE}
    rm -f ${BRICK}/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID:0:8}-${GFID:8:4}-${GFID:12:4}-${GFID:16:4}-${GFID:20:12}

Is this normal output, given that most of the rows show only a gfid and just a few show a path/filename that is in split-brain?

Nick
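P.S. For the rows that show only a <gfid:...>, the path can usually be recovered on the brick itself, since for regular files the entry .glusterfs/<aa>/<bb>/<full-gfid> is a hard link to the real file. A minimal sketch of that lookup, assuming GNU/BSD find with -samefile (the brick root and gfid in the example are just placeholders, not from any particular node):

```shell
# gfid_to_path <brick-root> <gfid>  -- print the real path of a gfid on a brick.
# Works for regular files only: directories and symlinks are not hard-linked
# under .glusterfs, so they will not be found this way.
gfid_to_path() {
    brick=$1
    gfid=$2
    aa=$(printf %s "$gfid" | cut -c1-2)   # first two hex chars of the gfid
    bb=$(printf %s "$gfid" | cut -c3-4)   # next two hex chars
    # Skip the .glusterfs tree itself, then print any other hard link
    # to the same inode as the gfid entry.
    find "$brick" -path "$brick/.glusterfs" -prune -o \
         -samefile "$brick/.glusterfs/$aa/$bb/$gfid" -print
}

# Example (hypothetical brick root):
#   gfid_to_path /data/repository/brick1 929454d9-d3f2-44cb-a96a-5ffdc9ed538a
```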