Hi,
Although I do not have experience with VM live migration, IIUC it has to do with a different server (and as a result a new glusterfs client process) taking over the operation and management of the VM.
If that assumption is correct, then I think this could be the result of the same caching bug in 3.7.5 that I mentioned some time back, which is fixed in 3.7.6.
That bug could cause the new client to not see the correct size and block count of the file, leading to errors in reads (perhaps triggered by the restart of the VM) and writes on the image.
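If you want to confirm whether the new client is seeing stale attributes, a minimal sketch along these lines could help; it simply compares the size and block count two clients report for the same image via stat(). The mount points and file name below are placeholders, assuming FUSE mounts of the volume on two nodes:

# compare_stat.py -- compare the size/block count two GlusterFS clients
# report for the same file; the mount paths below are placeholders.
import os
import sys

def stat_summary(path):
    st = os.stat(path)
    return st.st_size, st.st_blocks

if __name__ == "__main__":
    paths = sys.argv[1:] or [
        "/mnt/gluster-node1/images/vm-disk.qcow2",  # hypothetical mount on node 1
        "/mnt/gluster-node2/images/vm-disk.qcow2",  # hypothetical mount on node 2
    ]
    results = {p: stat_summary(p) for p in paths}
    for p, (size, blocks) in results.items():
        print("%s: size=%d blocks=%d" % (p, size, blocks))
    if len(set(results.values())) > 1:
        print("WARNING: clients disagree on size/block count")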
-Krutika
From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "gluster-users" <gluster-users@xxxxxxxxxxx>
Sent: Thursday, November 5, 2015 3:53:25 AM
Subject: File Corruption with shards - 100% reproducible

Gluster 3.7.5, gluster repos, on proxmox (debian 8)
- gluster replica 3, shards on, shard size = 256MB

I have an issue with VM images (qcow2) being corrupted.
- Gluster nodes are all also VM host nodes
- VM image mounted from qemu via gfapi

To reproduce:
- Start VM
- live migrate it to another node
- VM will rapidly become unresponsive and have to be stopped
- attempting to restart the VM results in a "qcow2: Image is corrupt; cannot be opened read/write" error.

I have never seen this before. 100% reproducible with shards on, never happens with shards off.

I don't think this happens when using NFS to access the shard volume; I suspect because with NFS it is still accessing the one node, whereas with gfapi it's handed off to the node the VM is running on.
--Lindsay
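When reproducing this, a small sketch along these lines (the image path is just a placeholder for your storage layout) can be used to check whether the qcow2 image reports as corrupt after the migration, by wrapping qemu-img check:

# check_image.py -- run 'qemu-img check' on the VM image after migration;
# the default image path below is a placeholder.
import subprocess
import sys

def qcow2_is_healthy(image):
    # qemu-img exits non-zero when the image has errors or cannot be opened
    proc = subprocess.run(["qemu-img", "check", image],
                          capture_output=True, text=True)
    sys.stdout.write(proc.stdout)
    if proc.returncode != 0:
        sys.stderr.write(proc.stderr)
    return proc.returncode == 0

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "/path/to/vm-disk.qcow2"
    sys.exit(0 if qcow2_is_healthy(image) else 1)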
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users