On 09.12.2015 at 14:39, Lindsay Mathieson wrote:
Nope. All VMs were running on node #1, without exception. Nodes #2 and #3 never had a VM running on them, so they were practically idle since their installation.

Basically, I set up node #1, including all VMs. Then I installed nodes #2 and #3, configured the Proxmox and Gluster clusters, and waited quite some time until Gluster had synced up nodes #2 and #3 (healing). From then on I rebooted nodes #2 and #3, but in theory these nodes never had to write to the Gluster volume at all. If you're interested, you can read about my upgrade strategy in this Proxmox forum post: http://forum.proxmox.com/threads/24990-Upgrade-3-4-HA-cluster-to-4-0-via-reinstallation-with-minimal-downtime?p=125040#post125040

Also, it seems rather strange to me that practically all ~15 VMs (!) suffered data corruption. It's as if Gluster considered node #2 or #3 to be ahead and "healed" in the wrong direction. I don't know...

BTW, once I understood what was going on, with the problematic "healing" still in progress, I was able to overwrite the bad images (still active on #1) using a standard Proxmox backup and restore, and Gluster handled that correctly.

Anyway, I really love the simplicity of Gluster (setting up and maintaining a cluster is extremely easy), but these healing issues are causing me some headaches... ;-)

Udo
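For reference, here is a minimal sketch of how one could wait for the heal queue to drain before rebooting the next node, rather than just waiting "quite some time". It assumes the gluster CLI is installed, that the heal-info output contains the usual "Number of entries:" lines per brick, and uses a hypothetical volume name ("datastore") that you would replace with your own:

import re
import subprocess
import sys
import time

VOLUME = "datastore"  # hypothetical volume name; replace with your own

def pending_heal_entries(volume: str) -> int:
    """Sum the 'Number of entries:' counts reported for each brick."""
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(n) for n in re.findall(r"Number of entries:\s*(\d+)", out))

if __name__ == "__main__":
    # Poll until no entries are pending self-heal, then exit.
    while True:
        entries = pending_heal_entries(VOLUME)
        print(f"pending heal entries: {entries}")
        if entries == 0:
            print("heal queue is empty; safer to proceed with the next node")
            sys.exit(0)
        time.sleep(30)

This only confirms that the heal queue is empty; it does not by itself guarantee the heal went in the right direction, which is exactly the problem described above.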