Re: Creating cluster replica on 2 nodes 2 bricks each.

Hi Nithya

This is what I have so far. I have peered both cluster nodes together and created a replica volume from node 1 and node 2 using the A bricks. Now, when I try to add the second pair of bricks, I get an error that the brick is already part of a volume, and when I run gluster volume info I see that the volume has switched to Distributed-Replicate.

Thanks

Jose

[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140 
Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634 
Self-heal Daemon on localhost               N/A       N/A        Y       3132 
Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626 

 

Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks

 

[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume info

 

Volume Name: scratch
Type: Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#


-------------------------------------

[root@gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
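The "already part of a volume" error usually means the brick directory still carries GlusterFS extended attributes left over from an earlier create or add-brick attempt. One commonly suggested cleanup (a sketch only — it is destructive, so run it on each node solely for a brick that is not currently serving any volume and whose contents are disposable) is:

```shell
# Run on each node for the affected brick path (here /gdata/brick2/scratch).
# WARNING: only do this if the brick is NOT part of a live volume and its
# data is disposable -- it wipes Gluster's bookkeeping for the directory.
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs
```

After clearing the attributes on both nodes, the add-brick command can be retried.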


[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140 
Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634 
Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626 
Self-heal Daemon on localhost               N/A       N/A        Y       3132 

 

Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks

 

[root@gluster01 ~]# gluster volume info

 

Volume Name: scratch
Type: Distributed-Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Brick3: gluster01ib:/gdata/brick2/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]# 



--------------------------------
Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001


On Jan 9, 2018, at 9:04 PM, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:

Hi,

Please let us know what commands you ran so far and the output of the gluster volume info command.

Thanks,
Nithya

On 9 January 2018 at 23:06, Jose Sanchez <josesanc@xxxxxxxxxxxx> wrote:
Hello

We are trying to set up Gluster for the project/scratch storage on our HPC machine, using replicated mode with 2 nodes and 2 bricks each (14 TB per brick).

Our goal is to have a replicated system between node 1 and node 2 (the A bricks), and then add the two B bricks from the two nodes, so we end up with a total of 28 TB in replicated mode.

Node 1 [ (Brick A) (Brick B) ]
Node 2 [ (Brick A) (Brick B) ]
--------------------------------------------
14Tb + 14Tb = 28Tb

At this point I was able to create the replica between node 1 and node 2 (the A bricks), but I have not been able to add the B bricks to the same replica; Gluster switches to Distributed-Replicate when I add them.
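For what it's worth, with only two servers and replica 2, adding a second brick pair necessarily creates a second replica set, so the 2 x 2 Distributed-Replicate result is expected — and it actually matches the goal: every file stays on both nodes, while the two brick pairs are pooled for roughly 28 TB usable. A sketch of the full sequence, using the hostnames and paths from this thread:

```shell
# Create the replica-2 volume on the A bricks, then add the B bricks as a
# second replica pair. The resulting 2 x 2 Distributed-Replicate volume
# keeps one copy of each file on each node while pooling both brick pairs.
gluster volume create scratch replica 2 transport tcp,rdma \
    gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch
gluster volume start scratch
gluster volume add-brick scratch replica 2 \
    gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
# Optionally spread existing files across the new replica pair:
gluster volume rebalance scratch start
```

A two-node pure Replicate volume spanning all four bricks (replica 4, or replica 2 across four bricks) would either halve capacity or place both copies of a file on the same node, so the distributed-replicated layout above is the usual recommendation here.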

Any help will be appreciated.

Thanks

Jose

---------------------------------
Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users


