Re: Move brick to new Node ( Distributed - Replicated Mode )

Hi Anuradha,

Thanks for the advice. The version I use is 3.8.

You are correct about my requirement: the original 2 nodes ( Distributed - Replicated Mode ) are being expanded to 3 nodes ( Distributed - Replicated Mode ). May I know what additional work is needed for heal to be triggered on glusterfs version 3.8?

My other question is: if I only have 2 bricks on each node, is it possible to add a 4th node ( which also has only 2 bricks ), a 5th node, etc. to the cluster, with the condition that no more bricks can be added to the existing nodes?
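
(For what it's worth, my rough understanding is that for a replica 2 volume, bricks have to be added in multiples of 2 and each pair becomes a new replica set, so new nodes would probably be added in pairs. A sketch of what that might look like, using a hypothetical node4 and node5 with made-up brick paths:

gluster peer probe node4
gluster peer probe node5
# each add-brick below adds one new replica pair that spans the two new nodes
gluster volume add-brick test-volume node4:/exp4/brickA node5:/exp5/brickB
gluster volume add-brick test-volume node4:/exp4/brickC node5:/exp5/brickD
# spread existing data onto the new bricks
gluster volume rebalance test-volume start

Please correct me if that is not the right approach.)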


Thank you for the guidance.

Jason


On Fri, Aug 12, 2016 at 3:19 PM, Anuradha Talur <atalur@xxxxxxxxxx> wrote:


----- Original Message -----
> From: "tecforte jason" <tecforte.jason@xxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Friday, August 12, 2016 5:41:07 AM
> Subject: Move brick to new Node ( Distributed - Replicated Mode )
>
> Hi,
>
> If I have an existing 2-node Distributed - Replicated setup like below:
>
> gluster volume create test-volume replica 2 \
>     node1:/exp1/brick1 node2:/exp2/brick2 \
>     node1:/exp1/brick3 node2:/exp2/brick4 \
>     node1:/exp1/brick5 node2:/exp2/brick6
>
> And now I want to add another new node to the cluster, so that
> node2:/exp2/brick2 is replicated with the new brick node3:/exp3/brick7,
>
> with the condition that I cannot add bricks to the existing node 1 and node 2.

What version of glusterfs are you using?

If I understand correctly, the current volume configuration is as follows:

node1:/exp1/brick1 node2:/exp2/brick2 (replicas of each other)
node1:/exp1/brick3 node2:/exp2/brick4
node1:/exp1/brick5 node2:/exp2/brick6

And you want to change it to:

node2:/exp2/brick2 node3:/exp3/brick7 (replicas of each other)
node1:/exp1/brick3 node2:/exp2/brick4
node1:/exp1/brick5 node2:/exp2/brick6

You can do the following:
gluster peer probe node3 (run this from one of the nodes in your cluster)
gluster v replace-brick <volname> node1:/exp1/brick1 node3:/exp3/brick7 commit force

This should now make node2:/exp2/brick2 and node3:/exp3/brick7 replicas of each other.
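
If you want to verify, something like the following should show node3:/exp3/brick7 listed in place of node1:/exp1/brick1 in the first replica pair:

gluster volume info <volname>
gluster volume status <volname>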

You might have to do some additional work for heal to be triggered, depending on the glusterfs version
you are using, so do mention the gluster version being used.
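
For example (just a sketch, the exact steps can differ between releases), a full heal can usually be started and monitored with:

gluster volume heal <volname> full
gluster volume heal <volname> info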

Let me know if my understanding of the requirement was incorrect.

Hope this helps.
>
> May I know how to do this?
>
> I appreciate the advice.
>
> Thanks
> Jason
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users

--
Thanks,
Anuradha.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
