Re: Need help to design a data storage


 



> What about EC? Is the redundant data spread across multiple servers? If not, multiple replicas would be placed on the same server. I can lose 2 bricks (2 disks), but what if I lose a whole server with both bricks on it? And when a server fails, multiple bricks are affected...

-----

Yes, the redundant data is spread across multiple servers. In my example I mentioned 6 different nodes, each with one brick.
The point is that with 4+2 you can lose any 2 bricks; it does not matter whether that is because of node failure or brick failure.
6 bricks on 6 different nodes: any 2 nodes may go down - EC wins.

However, if you have only 2 nodes with 3 bricks on each node, then yes, in that case EC will fail even if just one node goes down, because that takes 3 bricks down at once.
In that case replica 3 would win.
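
For illustration only - assuming 6 nodes named server1..server6, each exporting one brick at /bricks/brick1 (hostnames and brick paths are placeholders, adjust to your setup) - the two layouts could be created roughly like this:

  # 4+2 dispersed (EC) volume: one brick per node, any 2 bricks/nodes may fail
  gluster volume create ec-vol disperse 6 redundancy 2 \
      server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
      server4:/bricks/brick1 server5:/bricks/brick1 server6:/bricks/brick1

  # replica 3 over the same 6 bricks: a 2 x 3 distributed-replicate volume,
  # each file survives losing 2 of the 3 bricks that hold its copies
  gluster volume create rep-vol replica 3 \
      server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
      server4:/bricks/brick1 server5:/bricks/brick1 server6:/bricks/brick1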



From: "Gandalf Corvotempesta" <gandalf.corvotempesta@xxxxxxxxx>
To: "Ashish Pandey" <aspandey@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Tuesday, August 9, 2016 11:08:12 PM
Subject: Re: Need help to design a data storage

On 09 Aug 2016 at 19:20, "Ashish Pandey" <aspandey@xxxxxxxxxx> wrote:
> 3 - EC with redundancy 2 that is 4+2
> The overall storage space you get is 4TB and any 2 bricks can be down at any point in time. So it is as good as replica 3 but provides more space.

Not really.
With replica 3 I can place the bricks on different servers, so that I can lose multiple servers and not only multiple bricks.

What about EC? Is the redundant data spread across multiple servers? If not, multiple replicas would be placed on the same server. I can lose 2 bricks (2 disks), but what if I lose a whole server with both bricks on it? And when a server fails, multiple bricks are affected...

Replica 3 is like a RAID10 with 3 disks in each mirror (3 failed bricks in the same replica set = data loss). EC 4+2 is like RAID6 (3 failed bricks in the whole cluster = data loss). The former is safer than the latter but wastes a huge amount of space.
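
To put rough numbers on it, assuming six 1TB bricks (sizes are just for illustration):

  replica 3 (2 x 3 distributed-replicate): 6TB raw -> 2TB usable;
      data is lost only if all 3 bricks of the same replica set fail
  disperse 4+2: 6TB raw -> 4TB usable;
      data is lost if any 3 of the 6 bricks fail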


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
