On 08/02/2014 06:50 PM, Tiemen Ruiten wrote:
Hello,
I'm cross-posting this from ovirt-users:
I have an oVirt environment backed by a two-node Gluster-cluster.
Yesterday I decided to upgrade from GlusterFS 3.5.1 to 3.5.2, but that
caused the gluster daemon to stop, and now I have several lines like the
following in the log for the volume that hosts the VM images, called vmimage:
Did the upgrade happen while the volume was still running?
[2014-08-02 12:56:20.994767] E [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-vmimage-replicate-0: Unable to self-heal contents of 'f09c211d-eb49-4715-8031-85a5a8f39f18' (possible split-brain). Please delete the file from all but the preferred subvolume. - Pending matrix: [ [ 0 408 ] [ 180 0 ] ]
That pending matrix shows each brick accusing the other of stale data
(brick 0 reports 408 pending operations on brick 1, brick 1 reports 180
on brick 0), which is the split-brain signature. This document describes
how to resolve split-brain in gluster:
https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
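For the vmimage volume, that resolution could look roughly like the sketch
below. The brick path /export/vmimage and the file path are placeholders,
and the gfid is taken from the log line above; confirm the affected paths
with the heal info output before deleting anything:

  # List the files gluster currently considers split-brained
  gluster volume heal vmimage info split-brain

  # On the node whose copy you want to discard, remove the file from the
  # brick directly (never through the client mount), together with its
  # gfid hard link under .glusterfs. /export/vmimage is a placeholder
  # brick path; the gfid below comes from the log message above.
  rm /export/vmimage/<path-to-affected-file>
  rm /export/vmimage/.glusterfs/f0/9c/f09c211d-eb49-4715-8031-85a5a8f39f18

  # Trigger a heal from the surviving copy and check the result
  gluster volume heal vmimage
  gluster volume heal vmimage info

The .glusterfs path is derived from the first four hex characters of the
gfid (f0/9c here), which is how gluster stores the hard link on the brick.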
What I would like to do is the following, since I'm not 100% happy with
how the volume is set up anyway:
- Stop VDSM on the oVirt hosts / unmount the volume
- Stop the current vmimage volume and rename it
Is this a gluster volume? Gluster volumes can't be renamed.
- Create a new vmimage volume
- Copy the images from one of the nodes
Where will these images be copied to? Onto the gluster mount? If yes,
then there is no need for a separate sync.
- Start the volume and let it sync
- Restart VDSM / mount the volume (a rough sketch of these steps follows below)
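Since a volume can't be renamed, a rebuild along those lines would mean
deleting and recreating it. A minimal sketch, assuming a 2-node replica 2
volume; node1/node2, the brick paths and the mount point are all placeholders:

  # On each oVirt host: stop VDSM and unmount the volume
  service vdsmd stop
  umount /mnt/vmimage

  # On one gluster node: stop and delete the old volume definition
  # (this removes the volume, not the data on the bricks)
  gluster volume stop vmimage
  gluster volume delete vmimage

  # Recreate the volume on fresh brick directories; the old bricks keep
  # their xattrs and .glusterfs metadata, so don't reuse them directly
  gluster volume create vmimage replica 2 \
      node1:/export/newbrick node2:/export/newbrick
  gluster volume start vmimage

  # Copy the images in THROUGH a client mount, reading from one of the
  # old brick directories; writing via the mount is what creates the
  # replica on both nodes, so no separate sync step is needed
  mount -t glusterfs node1:/vmimage /mnt/vmimage
  rsync -a --exclude=.glusterfs /export/oldbrick/ /mnt/vmimage/

  # Restart VDSM on the oVirt hosts
  service vdsmd start

Note that this copies whichever version of the split-brained files exists
on the chosen node, so the split-brain should be resolved, or at least the
preferred copies identified, before the copy.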
Is this going to work? Or is there critical metadata that will not be
transferred with these steps?
Tiemen
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users