Re: 3.7.16 with sharding corrupts VMDK files when adding and removing bricks


 



> 
>    Having to create multiple clusters is not a solution and is much more
>    expensive.
>    And if you corrupt data from a single cluster you still have issues
> 

Sure, but thinking about it later we realised it might be for the better.
I believe that when sharding is enabled the shards are dispersed across all the
replica sets, which means losing a single replica set will kill all your VMs.

Imagine a 16x3 volume, for example: losing 2 bricks could bring the whole thing
down if they happen to be in the same replica set. (I might be wrong about the
way Gluster disperses shards, it's only my understanding, I never had the chance
to test it.)
With multiple small clusters we end up with the same disk space but without
that problem; it's a bit more annoying to manage, but for now that's all right.
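To make the reasoning concrete, here is a quick Python toy model of what I mean.
It's not Gluster code, and it assumes shards get hashed more or less uniformly
across the replica sets, which again is just my understanding of features.shard:

import random

REPLICA_SETS = 16          # a 16x3 volume has 16 replica sets of 3 bricks
VM_IMAGES = 100            # number of large VM images on the volume
SHARDS_PER_IMAGE = 640     # e.g. a 40 GB image with 64 MB shards

def affected_images(sharded, lost_set):
    """Count images that lose data when one whole replica set goes down."""
    hit = 0
    for _ in range(VM_IMAGES):
        if sharded:
            # each shard is placed independently, so almost every image
            # ends up with at least one shard on the lost replica set
            placements = {random.randrange(REPLICA_SETS)
                          for _ in range(SHARDS_PER_IMAGE)}
            hit += lost_set in placements
        else:
            # the whole file hashes to a single replica set
            hit += random.randrange(REPLICA_SETS) == lost_set
    return hit

random.seed(0)
print("sharded:  ", affected_images(True, lost_set=3), "of", VM_IMAGES)
print("unsharded:", affected_images(False, lost_set=3), "of", VM_IMAGES)

With sharding basically every image is hit; without it, only the handful of
images that happened to live on that replica set. That's the trade-off I had
in mind.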

> 
>    I'm also subscribed to the moosefs and lizardfs mailing lists and I don't
>    recall a single data corruption/data loss event
> 

I've never used those; it might just be because there are fewer users? I really
have no idea, maybe you're right.

>    If you change the shard size on a populated cluster, you break all
>    existing data.

Not really shocked there. I guess the CLI should warn you when you try
re-setting the option, though, that would be nice.
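To be clear about why I'd expect that to break things (as I understand it): the
client works out which shard a byte offset lives in from the shard size, so if
that size changes under existing files, reads start looking in the wrong
shards. A toy Python illustration of just that arithmetic, not Gluster code
(real shards are <gfid>.<index> files under /.shard, if I remember right):

def write_sharded(data, shard_size):
    """Split data into numbered shards of shard_size bytes."""
    return {i: data[off:off + shard_size]
            for i, off in enumerate(range(0, len(data), shard_size))}

def read_sharded(shards, offset, length, shard_size):
    """Read length bytes at offset, locating shards via shard_size."""
    out = b""
    while length > 0:
        idx, within = divmod(offset, shard_size)
        chunk = shards.get(idx, b"")[within:within + length]
        if not chunk:
            break
        out += chunk
        offset += len(chunk)
        length -= len(chunk)
    return out

data = bytes(range(256)) * 64                     # 16 KB of test data
shards = write_sharded(data, shard_size=4096)     # written with 4 KB shards

ok  = read_sharded(shards, 5000, 16, shard_size=4096)
bad = read_sharded(shards, 5000, 16, shard_size=8192)
print(ok == data[5000:5016])    # True  - same shard size as the writer
print(bad == data[5000:5016])   # False - index/offset now point elsewhere

Same data, same shards on disk, but the second read can't find the bytes any
more, which is why a CLI warning before re-setting the option would be nice.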

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


