Hi Udo, thanks for posting your volume
info settings. Please note for the following, I am not one of the
devs, just a user, so unfortunately I have no authoritative
answers :(
I am running a very similar setup - Proxmox 4.0, three nodes, but using
Ceph for our production storage. Am heavily testing Gluster 3.7 on the
side. We find the performance of Ceph slow on these small setups and
management of it a PITA.

Some more questions:

- How are your VM images being accessed by Proxmox? gfapi (the Proxmox
  Gluster storage type) or via the FUSE mount?
- What's your underlying filesystem (ext4, ZFS, etc.)?
- Are you using the HA/watchdog system in Proxmox?

On 07/12/15 21:03, Udo Giacomozzi wrote:
> Yesterday I had a strange situation where Gluster healing corrupted
> *all* my VM images. :(

Sounds painful - my sympathies.

You're running 3.5.2 - that's getting rather old. I use the Gluster
Debian repos:

3.6.7 : http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/
3.7.6 : http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/

3.6.x is the latest stable, 3.7 is close to stable(?). 3.7 has some nice
new features such as sharding, which is very useful for VM hosting - it
enables much faster heal times.

As regards what happened to your VMs, I'm not sure. Having two servers
down should have disabled the entire store, making it neither readable
nor writable. I note that you are missing some settings that need to be
set for VM stores - there will be corruption problems if you live
migrate without them:

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server

"stat-prefetch=off" is particularly important.
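For what it's worth, the short names above correspond to the full option
names you pass to `gluster volume set`. A sketch of applying them - the
volume name "datastore" is just a placeholder, substitute your own:

```shell
# Placeholder volume name - use your actual volume here
VOL=datastore

# Recommended options for VM image stores
gluster volume set $VOL performance.quick-read off
gluster volume set $VOL performance.read-ahead off
gluster volume set $VOL performance.io-cache off
gluster volume set $VOL performance.stat-prefetch off
gluster volume set $VOL cluster.eager-lock enable
gluster volume set $VOL network.remote-dio enable
gluster volume set $VOL cluster.quorum-type auto
gluster volume set $VOL cluster.server-quorum-type server

# Check what ended up set on the volume
gluster volume info $VOL
```

These can be applied on a live volume, though I'd do it in a quiet
window since caching behaviour changes for clients.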
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users