Re: Node count constraints with EC?

While creating the volume, just provide bricks that are hosted on different servers.

gluster v create <volume name> redundancy 2 server-1:/brick1 server-2:/brick2 server-3:/brick3 server-4:/brick4 server-5:/brick5 server-6:/brick6

At present you cannot differentiate between data bricks and parity bricks. That is, in the above command you cannot say which bricks out of brick1 to brick6 will be the parity bricks.
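For what it's worth, the data/redundancy split itself can be requested explicitly with the disperse-data option (a sketch, assuming a reasonably recent GlusterFS release; the volume name "ecvol" is a placeholder):

# 4 data + 2 redundancy bricks, one brick per server
gluster volume create ecvol disperse-data 4 redundancy 2 \
    server-1:/brick1 server-2:/brick2 server-3:/brick3 \
    server-4:/brick4 server-5:/brick5 server-6:/brick6

Even then, you still do not pick which bricks hold parity; the counts only fix the data/redundancy split.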


From: "Gandalf Corvotempesta" <gandalf.corvotempesta@xxxxxxxxx>
To: "Ashish Pandey" <aspandey@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Friday, March 31, 2017 12:19:58 PM
Subject: Re: Node count constraints with EC?

How can I ensure that each parity brick is stored on a different server?

On 30 Mar 2017 at 6:50 AM, "Ashish Pandey" <aspandey@xxxxxxxxxx> wrote:
Hi Terry,

There is no constraint on the number of nodes for erasure coded volumes.
However, there are some suggestions to keep in mind.

If you have a 4+2 configuration, that means you can lose at most 2 bricks at a time without losing access to your volume for IO.
Bricks may fail because of a node crash or a node disconnection. That is why it is always good to have all 6 bricks on 6 different nodes. If you have 3 bricks on one node and that node goes down, you
will lose the volume and it will be inaccessible.
So just keep in mind that you should not lose more than the redundancy count of bricks even if any one node goes down.
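As a concrete sketch (hypothetical node names and brick paths):

# 4+2 volume: redundancy = 2, so at most 2 bricks may be down at once.
#
# Layout A: one brick per node -- any single node failure costs 1 brick.
#   node1:/b node2:/b node3:/b node4:/b node5:/b node6:/b
#   -> volume stays up if any one node goes down
#
# Layout B: three bricks on one node -- losing node1 costs 3 bricks (> 2).
#   node1:/b1 node1:/b2 node1:/b3 node2:/b node3:/b node4:/b
#   -> volume becomes inaccessible when node1 goes down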

----
Ashish
    


From: "Terry McGuire" <tmcguire@xxxxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Wednesday, March 29, 2017 11:59:32 PM
Subject: Node count constraints with EC?

Hello list.  Newbie question:  I’m building a low-performance/low-cost storage service with a starting size of about 500TB, and want to use Gluster with erasure coding.  I’m considering subvolumes of maybe 4+2, 8+3, or 8+4.  I was thinking I’d spread these over 4 nodes, and add single nodes over time, with subvolumes rearranged over new nodes to maintain protection from whole-node failures.

However, reading through some Red Hat-provided documentation, they seem to suggest that node counts should be a multiple of 3, 6, or 12, depending on the subvolume config.  Is this actually a requirement, or is it only a suggestion for best performance or something?

Can anyone comment on node count constraints with erasure coded subvolumes?

Thanks in advance for anyone’s reply,
Terry

_____________________________
Terry McGuire
Information Services and Technology (IST)
University of Alberta
Edmonton, Alberta, Canada  T6G 2H1
Phone:  780-492-9422



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
