Re: Volume rebalance issue


 



> We have a volume of 4 servers 8x2 bricks (Distributed-Replicate) hosting VMs for ESXi, i tried expanding the volume with 8 more bricks, and after rebalancing the volume, the VMs got corrupted.
> [...]
> Is it affecting all of Gluster versions ? is there any workaround or a volume setup that is not affected by this issue ?

Sure sounds like what corrupted everything for me a few months ago :). Had to spend the whole night
re-creating the VMs from backups, and explaining the data loss and downtime to the clients wasn't easy.

Unfortunately I believe they never managed to reproduce the issue, so I don't think it was ever fixed,
no. We are using 3.7.13, so downgrading won't help you, and I don't know of any workaround.

We decided to just not expand volumes: when one is full we create a new one instead of
adding bricks to the existing one. Not ideal, but not a big deal, at least yet. Since VMs are
easy enough to live migrate from one volume to another, it seemed like the easiest solution.
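For what it's worth, the "new volume instead of add-brick" approach sketched above looks roughly like this. The volume name, hostnames, and brick paths here are made up for illustration; adjust replica count and options to match your existing setup:

```shell
# Hypothetical hosts/paths -- adapt to your environment.
# Instead of add-brick + rebalance on the existing volume, put the new
# bricks into a fresh Distributed-Replicate volume:
gluster volume create vmstore2 replica 2 \
    server5:/data/brick1/vmstore2 server6:/data/brick1/vmstore2 \
    server7:/data/brick1/vmstore2 server8:/data/brick1/vmstore2
gluster volume start vmstore2

# Apply the same VM-oriented tuning as the original volume, e.g. the
# predefined virt option group:
gluster volume set vmstore2 group virt

# Then live-migrate VM disks onto the new volume (Storage vMotion on
# ESXi, or the equivalent on your hypervisor) and leave the old
# volume's layout untouched -- no rebalance ever runs.
```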


-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
