Re: Volume rebalance issue

Hi,

We fixed this (thanks to Satheesaran for recreating the issue and to Raghavendra G and Pranith for the RCA) as recently as last week.
The bug was in DHT-shard interaction.

The patches are https://review.gluster.org/#/c/16709/ and https://review.gluster.org/#/c/14419, to be applied in that order.
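
If it helps, here's a rough sketch of how the patches could be cherry-picked from Gerrit onto a 3.8.9 source tree. The /1 patch-set suffixes below are placeholders, not the actual revisions; use the latest patch set shown on each review page:

    # Assumes a glusterfs source checkout; patch-set numbers (/1) are placeholders.
    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git checkout v3.8.9
    # Apply change 16709 first, then 14419, in that order.
    git fetch https://review.gluster.org/glusterfs refs/changes/09/16709/1 && git cherry-pick FETCH_HEAD
    git fetch https://review.gluster.org/glusterfs refs/changes/19/14419/1 && git cherry-pick FETCH_HEAD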

Would you mind giving these a try before the fix makes it into the next minor (.x) releases of 3.8, 3.9, and 3.10?
I could make a source tarball with these patches applied if you like.

-Krutika

On Sat, Feb 25, 2017 at 8:56 PM, Mahdi Adnan <mahdi.adnan@xxxxxxxxxxx> wrote:

Hi,


We have a volume across 4 servers with 8x2 bricks (Distributed-Replicate) hosting VMs for ESXi. I tried expanding the volume with 8 more bricks, and after rebalancing the volume, the VMs got corrupted.
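
For reference, the expansion was done along these lines (the volume name, hostnames, and brick paths below are placeholders, not our actual ones):

    # Add 8 new bricks (4 replica-2 pairs) to the existing 8x2 volume.
    gluster volume add-brick <VOLNAME> server5:/bricks/b1 server6:/bricks/b1 server5:/bricks/b2 server6:/bricks/b2 ...
    # Start the rebalance and watch its progress.
    gluster volume rebalance <VOLNAME> start
    gluster volume rebalance <VOLNAME> status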

Gluster version is 3.8.9, and the volume uses the default parameters of the "virt" group plus sharding.
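
Roughly, the volume options were applied like this (the volume name is a placeholder):

    # Apply the stock "virt" option group, then enable sharding.
    gluster volume set <VOLNAME> group virt
    gluster volume set <VOLNAME> features.shard on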

I created a new volume without sharding and got the same issue after the rebalance.

I checked the reported bugs and the mailing list, and I noticed it's a known bug in Gluster.

Does it affect all Gluster versions? Is there any workaround, or a volume setup that is not affected by this issue?


Thank you.


--

Respectfully
Mahdi A. Mahdi



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

