Thanks for pointing that out...
but it doesn't seem to work... or I am too sleepy due to the glusterfs and Debian 8 problems in the other topic, which I have been fighting for a month..
root@stor1:~# gluster volume heal HA-2TB-TT-Proxmox-cluster split-brain source-brick stor1:HA-2TB-TT-Proxmox-cluster/2TB /images/124/vm-124-disk-1.qcow2
Usage: volume heal <VOLNAME> [{full | statistics {heal-count {replica <hostname:brickname>}} |info {healed | heal-failed | split-brain}}]
Seems like the wrong command...
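For what it's worth, the usage string printed above doesn't mention a `split-brain source-brick` sub-command at all, which suggests the installed CLI predates glusterfs 3.7, where that syntax was introduced. On a 3.7+ CLI, the command would roughly take this shape (the brick and file paths below are taken from the heal-info output further down and may need adjusting for your setup):

```shell
# Requires glusterfs >= 3.7. The source brick must be spelled exactly as
# "gluster volume info" prints it (here the brick is the /exports/... path,
# not the volume name), and the file path is given relative to the volume
# root, with a leading slash and no space before it.
gluster volume heal HA-2TB-TT-Proxmox-cluster split-brain \
    source-brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB \
    /images/124/vm-124-disk-1.qcow2
```

This only runs against a live gluster cluster, so treat it as a syntax sketch rather than a tested command.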
2015-07-14 21:23 GMT+03:00 Joe Julian <joe@xxxxxxxxxxxxxxxx>:
On 07/14/2015 11:19 AM, Roman wrote:
https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
Hi,
I played with glusterfs tonight and tried to use the recommended XFS for gluster. The first try was pretty bad and all of my VMs hung: XFS wants allocsize=64k to create qcow2 files, which I didn't know about, so I tried to create a VM on XFS without this option in fstab, which led to a lot of IO, and qemu said it timed out while creating the file..
Now I've got this:
Brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
/images/124/vm-124-disk-1.qcow2 - Is in split-brain
Number of entries: 1
Brick stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
/images/124/vm-124-disk-1.qcow2 - Is in split-brain
OK, what next?
I deleted one of the files, but it didn't help. Even worse, self-heal restored the file on the node where I deleted it... and it's still in split-brain.
How do I fix this?
--
Best regards,
Roman.
or
https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
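The blog post above describes the manual recovery method, which also explains why deleting the file alone didn't help: every file on a brick has a hard link under the brick's `.glusterfs/` directory, keyed by its gfid, and self-heal recreates the file from that link. A rough sketch of the procedure on the brick whose copy you want to discard (the gfid value and derived path below are illustrative):

```shell
# Run on the node whose copy of the file is to be discarded,
# operating on the brick directory itself, not a client mount.
BRICK=/exports/HA-2TB-TT-Proxmox-cluster/2TB
F=$BRICK/images/124/vm-124-disk-1.qcow2

# Read the file's gfid from its extended attributes.
getfattr -n trusted.gfid -e hex "$F"
# e.g. trusted.gfid=0xabcdef12...   (illustrative value)

# Remove both the file and its gfid hard link under .glusterfs/ --
# the first two bytes of the gfid give the two directory levels.
rm "$F"
rm "$BRICK/.glusterfs/ab/cd/abcdef12-..."   # path built from the gfid above

# Then trigger a heal (or simply stat the file from a client mount)
# so the surviving copy is replicated back.
gluster volume heal HA-2TB-TT-Proxmox-cluster full
```

Since this manipulates a live brick, it is a sketch of the linked procedure under those assumptions, not a tested one-liner.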
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users