On 12 Nov 2016 at 10:21, "Kevin Lemonnier" <lemonnierk@xxxxxxxxx> wrote:
> We've had a lot of problems in the past, but at least for us 3.7.12 (and 3.7.15)
> seems to be working pretty well as long as you don't add bricks. We started doing
> multiple little clusters and abandoned the idea of one big cluster, had no
> issues since :)
> Well, adding bricks could be useful... :)
Having to create multiple clusters is not a solution, and it is much more expensive.
And if you corrupt data on a single cluster, you still have issues. I think it would be better to add fewer features and focus more on stability.
On gluster-users and in the oVirt community we saw people trying Gluster and complaining about heal times and split-brains. So we had to fix bugs in quorum for 3-way replication; then we started working on features like sharding for better heal times and arbiter volumes for cost benefits.
In software-defined storage, stability and consistency are the most important things.
I'm also subscribed to the MooseFS and LizardFS mailing lists, and I don't recall a single data corruption or data loss event there.
In Gluster, after a few days of testing, I found a huge data corruption issue that is still unfixed on Bugzilla:
if you change the shard size on a populated cluster, you break all existing data.
Try doing that on a cluster with running VMs and see what happens...
A single CLI command breaks everything, and it is still unfixed.
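For illustration only, this is the kind of command I mean (I'm assuming the standard shard option name, features.shard-block-size; the volume name is just a placeholder):

    gluster volume set myvol features.shard-block-size 128MB

Run against a volume that already holds sharded data, that single setting change is what breaks the existing files.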
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users