https://docs.gluster.org/en/v3/Troubleshooting/resolving-splitbrain/
Hopefully the link above will help you fix it.
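In short, that page boils down to choosing which copy of the file wins and telling the self-heal daemon about it. A sketch of the CLI policies it describes (run on any server in the pool; volume name and gfid taken from the heal output quoted below — pick ONE policy, and the source brick here is only an example):

```shell
# Heal a single split-brain entry by policy. <FILE> may be a path as
# seen from the volume root, or the gfid-string from "heal info".

# Keep the larger copy:
gluster volume heal gv0 split-brain bigger-file \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693

# ...or keep the most recently modified copy:
gluster volume heal gv0 split-brain latest-mtime \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693

# ...or explicitly pick one brick's copy as the source
# (glusterp2 is just an example here):
gluster volume heal gv0 split-brain source-brick \
    glusterp2:/bricks/brick1/gv0 \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693
```

After that, `gluster volume heal gv0 info` should show the entry gone.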
Diego
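One more hint for the "what file is it?" question in the quoted output below: when `heal info` shows only a `<gfid:...>` entry, you can map it to a real path on a brick, because every regular file on a brick is hard-linked under `.glusterfs/<aa>/<bb>/<gfid>` (aa/bb being the first four hex digits of the gfid). A minimal sketch, using the brick path and gfid from this thread:

```shell
# Map a gfid from "gluster volume heal ... info" to its path on a brick.
GFID="eafb8799-4e7a-4264-9213-26997c5a4693"   # from the heal output
BRICK="/bricks/brick1/gv0"                    # brick root on this server

# Regular files are hard-linked at .glusterfs/<aa>/<bb>/<gfid>:
GFID_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$GFID_PATH"
# → /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

# On the brick itself, find the real file sharing that inode
# (skipped here if the path doesn't exist, e.g. on another machine):
if [ -e "$GFID_PATH" ]; then
    find "$BRICK" -samefile "$GFID_PATH" -not -path "*/.glusterfs/*"
fi
```

If nothing outside `.glusterfs` turns up, the entry may be a directory or a stale gfid link rather than a regular file.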
On Wed, May 9, 2018, 21:53 Thing <thing.thing@xxxxxxxxx> wrote:
Trying to read this, I can't understand what is wrong:

[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1

Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1

Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1

[root@glusterp1 gv0]# getfattr -d -m . -e hex /bricks/brick1/gv0
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3

[root@glusterp1 gv0]# gluster volume info vol
Volume vol does not exist

[root@glusterp1 gv0]# gluster volume info gv0
Volume Name: gv0
Type: Replicate
Volume ID: cfceb353-5f0e-4cf1-8b53-3ccfb1f091d3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterp1:/bricks/brick1/gv0
Brick2: glusterp2:/bricks/brick1/gv0
Brick3: glusterp3:/bricks/brick1/gv0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
[root@glusterp1 gv0]#

================

[root@glusterp2 gv0]# getfattr -d -m . -e hex /bricks/brick1/gv0
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
[root@glusterp2 gv0]#

================

[root@glusterp3 isos]# getfattr -d -m . -e hex /bricks/brick1/gv0
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
[root@glusterp3 isos]#

_______________________________________________

On 10 May 2018 at 13:22, Thing <thing.thing@xxxxxxxxx> wrote:

Whatever repair happened has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at it, I can't determine what file (dir gv0?) is actually the issue.

[root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1

Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1

Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1

[root@glusterp1 gv0]#

On 10 May 2018 at 12:22, Thing <thing.thing@xxxxxxxxx> wrote:

Also, I have this "split brain"?

[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1

Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5

Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5

[root@glusterp1 gv0]#

On 10 May 2018 at 12:20, Thing <thing.thing@xxxxxxxxx> wrote:

[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick glusterp3:/bricks/brick1/gv0          49152     0          Y       2110
Self-heal Daemon on localhost               N/A       N/A        Y       5219
Self-heal Daemon on glusterp2               N/A       N/A        Y       1943
Self-heal Daemon on glusterp3               N/A       N/A        Y       2067

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

[root@glusterp1 gv0]# ls -l glusterp1/images/
total 2877064
-rw-------. 2 root root 107390828544 May 10 12:18 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May 10 12:18 kubernetes-template.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker01.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker02.qcow2
[root@glusterp1 gv0]#

while,

[root@glusterp2 gv0]# ls -l glusterp1/images/
total 11209084
-rw-------. 2 root root 107390828544 May  9 14:45 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May  9 15:59 kubernetes-template.qcow2
-rw-------. 2 root root   3792371712 May  9 16:15 kworker01.qcow2
-rw-------. 2 root root   3792371712 May 10 11:20 kworker02.qcow2
[root@glusterp2 gv0]#

So some files have re-synced, but not the kworker machines, and network activity has stopped.

On 10 May 2018 at 12:05, Diego Remolina <dijuremo@xxxxxxxxx> wrote:

Show us output from: gluster v status

It should be easy to fix. Stop the gluster daemon on that node, mount the brick, then start the gluster daemon again.

Check: gluster v status

Does it show the brick up?

HTH,
Diego

On Wed, May 9, 2018, 20:01 Thing <thing.thing@xxxxxxxxx> wrote:

Hi,

I have 3 CentOS 7.4 machines set up as a 3-way raid 1. Due to an oopsie on my part, on glusterp1 /bricks/brick1/gv0 didn't mount on boot, and as a result it's empty. Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3 /bricks/brick1/gv0 as expected.

Is there a way to get glusterp1's gv0 to sync off the other 2? There must be, but I have looked at the gluster docs and I can't find anything about repairing or resyncing. Where am I meant to look for such info?

thanks
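For the original empty-brick problem, the stop/mount/start steps Diego describes might look like this on glusterp1 (a sketch only: the mount point assumes a matching fstab entry, and systemd unit names may differ by install):

```shell
# On glusterp1, whose brick filesystem missed mounting at boot:
systemctl stop glusterd       # stop the gluster management daemon
mount /bricks/brick1          # mount the brick filesystem (per fstab)
systemctl start glusterd      # start the daemon again

# The brick should now show Online "Y"; self-heal then copies the
# data back from glusterp2/glusterp3:
gluster volume status gv0
```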
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users