Hello,

I'm cross-posting this from ovirt-users. I have an oVirt environment backed by a two-node Gluster cluster. Yesterday I decided to upgrade from GlusterFS 3.5.1 to 3.5.2, but that caused the gluster daemon to stop, and now I have several lines like this in my log for the volume that hosts the VM images, called vmimage:

[2014-08-02 12:56:20.994767] E [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-vmimage-replicate-0: Unable to self-heal contents of 'f09c211d-eb49-4715-8031-85a5a8f39f18' (possible split-brain). Please delete the file from all but the preferred subvolume.
- Pending matrix: [ [ 0 408 ] [ 180 0 ] ]

Since I'm not 100% happy with how the volume is set up anyway, what I would like to do is the following:

- Stop VDSM on the oVirt hosts / unmount the volume
- Stop the current vmimage volume and rename it
- Create a new vmimage volume
- Copy the images from one of the nodes
- Start the volume and let it sync
- Restart VDSM / mount the volume

Is this going to work? Or is there critical metadata that will not be transferred with these steps?

Tiemen

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
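[Editor's note: the step list above could be sketched roughly as the shell session below. It is only an illustrative outline, not a tested procedure: host names, brick paths, and the mount point are placeholders, and note that the gluster CLI has no supported volume-rename command, so "rename" here is approximated by leaving the old volume stopped and creating the new one under a different name.]

```shell
# Hypothetical sketch of the migration plan; all paths/hostnames are placeholders.

# On each oVirt host: stop VDSM and unmount the storage domain
service vdsmd stop
umount /rhev/data-center/mnt/node1:vmimage

# On one Gluster node: stop the old volume.
# There is no "gluster volume rename", so keep the old volume stopped
# (delete it only after verifying the copied data) and use a new name.
gluster volume stop vmimage

# Create and start a new replica-2 volume on fresh brick directories
gluster volume create vmimage2 replica 2 \
    node1:/export/bricks/vmimage2 node2:/export/bricks/vmimage2
gluster volume start vmimage2

# Copy the images from one node's old brick THROUGH a Gluster mount of the
# new volume, so replication and Gluster's own metadata are handled for you
mount -t glusterfs node1:/vmimage2 /mnt/vmimage2
cp -a /export/bricks/vmimage/* /mnt/vmimage2/

# Check heal/replication status, then remount on the hosts and restart VDSM
gluster volume heal vmimage2 info
service vdsmd start
```

Copying directly between brick directories instead of through a client mount would skip the extended attributes (gfid, afr changelogs) that Gluster maintains, which is exactly the kind of metadata concern raised in the question.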