Also, I have this "split brain"?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
[root@glusterp1 gv0]#
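
For the gfid entry reported as split-brain above, the gluster CLI has policy-based resolution commands. A minimal sketch, assuming a recent 3.x/4.x release; check first which brick actually holds the good copy, since the losing copies are discarded (glusterp2 in the source-brick line is only an example, not a recommendation):

gluster volume heal gv0 info split-brain

# keep the copy with the newest mtime; the gfid string from heal info
# can be used in place of a path
gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693

# or explicitly name the brick whose copy should win
gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 gfid:eafb8799-4e7a-4264-9213-26997c5a4693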
On 10 May 2018 at 12:20, Thing <thing.thing@xxxxxxxxx> wrote:
[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick glusterp3:/bricks/brick1/gv0          49152     0          Y       2110
Self-heal Daemon on localhost               N/A       N/A        Y       5219
Self-heal Daemon on glusterp2               N/A       N/A        Y       1943
Self-heal Daemon on glusterp3               N/A       N/A        Y       2067

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

[root@glusterp1 gv0]# ls -l glusterp1/images/
total 2877064
-rw-------. 2 root root 107390828544 May 10 12:18 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May 10 12:18 kubernetes-template.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker01.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker02.qcow2
[root@glusterp1 gv0]#

while,

[root@glusterp2 gv0]# ls -l glusterp1/images/
total 11209084
-rw-------. 2 root root 107390828544 May  9 14:45 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May  9 15:59 kubernetes-template.qcow2
-rw-------. 2 root root   3792371712 May  9 16:15 kworker01.qcow2
-rw-------. 2 root root   3792371712 May 10 11:20 kworker02.qcow2
[root@glusterp2 gv0]#

So some files have re-synced, but not the kworker machines; network activity has stopped.

On 10 May 2018 at 12:05, Diego Remolina <dijuremo@xxxxxxxxx> wrote:

Show us output from:

gluster v status

It should be easy to fix. Stop the gluster daemon on that node, mount the brick, then start the gluster daemon again.

Check:

gluster v status

Does it show the brick up?

HTH,
Diego

On Wed, May 9, 2018, 20:01 Thing <thing.thing@xxxxxxxxx> wrote:

Hi,

I have 3 CentOS 7.4 machines set up as a 3-way raid 1.

Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on boot and as a result it's empty.

Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3 /bricks/brick1/gv0 as expected.

Is there a way to get glusterp1's gv0 to sync off the other 2? There must be, but I have looked at the gluster docs and I can't find anything about repairing/resyncing.

Where am I meant to look for such info?

thanks
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users