Good morning,
So I have tested the new Gluster 3.10.2, and after starting the rebalance two VMs were paused due to a storage error and a third one stopped responding.
After the rebalance completed I started the VMs, but they did not boot and threw an XFS wrong inode error on the screen.
My setup:
4 nodes running CentOS 7.3 with Gluster 3.10.2.
4 bricks in a distributed-replicate volume with the group set to virt.
I added the volume to oVirt and created three VMs, then ran a loop to create a 5GB file inside each VM.
Added 4 new bricks to the existing nodes.
Started the rebalance with force to bypass the warning message (rough command sequence below).
The VMs started to fail after rebalancing.
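For reference, the command sequence for these steps was roughly the following; this is a minimal sketch assuming replica 2, and the volume name "myvol" and the brick paths are placeholders rather than my exact ones:

# gluster volume create myvol replica 2 node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/brick1 node4:/bricks/brick1
# gluster volume set myvol group virt
# gluster volume start myvol
# gluster volume add-brick myvol node1:/bricks/brick2 node2:/bricks/brick2 node3:/bricks/brick2 node4:/bricks/brick2
# gluster volume rebalance myvol start force

Inside each VM the file-creation loop was something along these lines:

# while true; do dd if=/dev/zero of=/root/testfile bs=1M count=5120; done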
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
Sent: Wednesday, May 17, 2017 6:59:20 AM
To: gluster-user
Cc: Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier; Mahdi Adnan
Subject: Rebalance + VM corruption - current status and request for feedback
Hi,
In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance -
https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051

Although 3.10.2 has a patch that prevents rebalance sub-commands from being executed on sharded volumes, you can override the check by using the 'force' option.
For example,
# gluster volume rebalance myvol start force
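If it helps while testing, the rebalance progress can then be monitored with the usual status sub-command (again using "myvol" as a placeholder volume name):

# gluster volume rebalance myvol status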
Very much looking forward to hearing from you all.
Thanks,
Krutika