Hi,
We have a volume of 4 servers, 8x2 bricks (Distributed-Replicate), hosting VM images for ESXi. I tried expanding the volume with 8 more bricks, and after rebalancing the volume, the VMs got corrupted. The Gluster version is 3.8.9, and the volume uses the default parameters of the "virt" group plus sharding. I created a new volume without sharding and hit the same issue after the rebalance. I checked the reported bugs and the mailing list, and it looks like a known bug in Gluster. Does it affect all Gluster versions? Is there any workaround, or a volume setup that is not affected by this issue?
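For reference, the expansion was done along these lines (a sketch; the volume name "vmstore" and the server/brick paths are placeholders, not the actual ones):

```
# Add 8 bricks (4 replica pairs) to the existing replica-2 volume
gluster volume add-brick vmstore replica 2 \
    server5:/bricks/b1 server5:/bricks/b2 \
    server6:/bricks/b1 server6:/bricks/b2 \
    server7:/bricks/b1 server7:/bricks/b2 \
    server8:/bricks/b1 server8:/bricks/b2

# Then rebalance to spread existing data onto the new bricks;
# the corruption appeared after this step
gluster volume rebalance vmstore start
gluster volume rebalance vmstore status
```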
Thank you.
--
Respectfully,
Mahdi A. Mahdi
_______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx http://lists.gluster.org/mailman/listinfo/gluster-users