Re: Glusterfs Rack-Zone Awareness feature...


 



Actually, "gluster volume add-brick vol_name server3:/export/brick1/1" would fail because you have to add bricks in multiples of replica count, ie "gluster volume add-brick vol_name server3:/export/brick1/1 server4:/export/brick1/1" and rebalance.

The bricks on server1 and server2 will form one replica pair, and those on server3 and server4 will form another. Files will be distributed across those two replica pairs by filename hash, according to the DHT algorithm.
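For reference, the expansion (plus a quick check of which bricks ended up paired) might look like this; a sketch assuming the 3.4/3.5-era CLI and the volume and brick names from your message below:

# Add both new bricks in one call, in a multiple of the replica count (2),
# so that server3 and server4 become a second replica pair.
gluster volume add-brick vol_name server3:/export/brick1/1 server4:/export/brick1/1

# Recompute the directory layout so new files can hash to the new pair,
# then migrate existing files onto it.
gluster volume rebalance vol_name fix-layout start
gluster volume rebalance vol_name start

# Verify the grouping and watch the migration: with replica 2, each
# consecutive pair of bricks listed by "volume info" forms one replica set.
gluster volume info vol_name
gluster volume rebalance vol_name status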

On April 22, 2014 8:17:24 AM PDT, "COCHE Sébastien" <SCOCHE@xxxxxxxx> wrote:

Sorry if my question is not clear.

When I create a new replicated volume, using only 2 nodes, I use this command line: ‘gluster volume create vol_name replica 2 transport tcp server1:/export/brick1/1 server2:/export/brick1/1’

server1 and server2 are in 2 different datacenters.

Now, if I want to expand the Gluster volume using 2 new servers (e.g. server3 and server4), I use these commands:

‘gluster volume add-brick vol_name server3:/export/brick1/1’

‘gluster volume add-brick vol_name server4:/export/brick1/1’

‘gluster volume rebalance vol_name fix-layout start’

‘gluster volume rebalance vol_name start’

How does the rebalance command work?

How can I be sure that the replicated data are not stored on servers hosted in the same datacenter?

 

Sébastien

 

-----Original Message-----
From: Jeff Darcy [mailto:jdarcy@xxxxxxxxxx]
Sent: Friday, 18 April 2014 18:52
To: COCHE Sébastien
Cc: gluster-users@xxxxxxxxxxx
Subject: Re: Glusterfs Rack-Zone Awareness feature...

 

> I do not understand why it could be a problem to place the data's
> replica on a different node group.
> If a group of nodes becomes unavailable (due to datacenter failure, for
> example) the volume should remain online, using the second group.

 

I'm not sure what you're getting at here.  If you're talking about initial placement of replicas, we can place all members of each replica set in different node groups (e.g. racks).  If you're talking about adding new replica members when a previous one has failed, then the question is *when*.
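Concretely, one way to get that initial placement with "replica 2" is to order the bricks so that each consecutive pair mixes the two locations; a sketch with hypothetical hostnames dc1-srv* and dc2-srv*:

# Bricks are grouped into replica sets in the order they are listed, so each
# pair below has one brick per datacenter.
gluster volume create vol_name replica 2 transport tcp \
    dc1-srv1:/export/brick1/1 dc2-srv1:/export/brick1/1 \
    dc1-srv2:/export/brick1/1 dc2-srv2:/export/brick1/1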

Re-populating a new replica can be very expensive.  It's not worth starting if the previously failed replica is likely to come back before you're done.

We provide the tools (e.g. replace-brick) to deal with longer term or even permanent failures, but we don't re-replicate automatically.  Is that what you're talking about?
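For a longer-term or permanent failure, the manual path is roughly the following; a sketch with a hypothetical replacement host server5, noting that the exact replace-brick sub-commands vary between Gluster releases:

# Swap the failed brick for one on a new server, then trigger self-heal so
# the surviving replica re-populates it.
gluster volume replace-brick vol_name server2:/export/brick1/1 server5:/export/brick1/1 commit force
gluster volume heal vol_name full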




--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
