Increase Replica Count


 



Am I to understand this correctly: do we have to “delete” a volume in order to re-create it with a new replica count?

 

What I am trying to do is increase the replica count so that two of my bricks are on two separate nodes in one datacenter, with another brick on a node in a separate datacenter that is linked to the first site via a dedicated 40GB fiber line (i.e., DR).

Or would a Distributed-Replicate volume with a replica count of 2 be a better choice, replicating a brick between the two datacenters?

Most, if not all, of our load is at one site, but if we lose that site we have a hot standby available: slower, yet available.

 

I’d like to have two servers in each datacenter, with two bricks each, so that we can expand accordingly (something like the sketch below). Suggestions? High availability is the key, not speed or IOPS.
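To make that concrete, something like the following is roughly what I have in mind once the fourth server exists. This is only a sketch: the volume name “storage2” and the hostname “seagls2” for the future fourth server are placeholders, the brick paths are illustrative, and I am relying on the fact that in a distributed-replicate volume each consecutive group of “replica” bricks forms one replica set, so listing the bricks in this order should pair each evtgls brick with a seagls brick.

# Hypothetical: replica 2, two servers per site, two bricks per server,
# with every replica pair spanning the two datacenters.
gluster volume create storage2 replica 2 transport tcp \
  evtgls1:/exp/br01/brick1 seagls1:/exp/br01/brick1 \
  evtgls2:/exp/br01/brick1 seagls2:/exp/br01/brick1 \
  evtgls1:/exp/br02/brick2 seagls1:/exp/br02/brick2 \
  evtgls2:/exp/br02/brick2 seagls2:/exp/br02/brick2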

 

What does deleting and re-creating the volume do to the system?

Data loss?

Obviously the storage would not be available during this time.

 

What happens to the data that already exists on a two-node (two bricks per node) Distributed-Replicate volume with replica 2?

How does the system then increase the replica count?

Do I have to rebalance the Volume when complete?
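For what it’s worth, the kind of in-place change I was hoping existed (instead of delete/re-create) is something along these lines; I am not sure whether the add-brick form that raises the replica count is supported in our version, and the new brick paths below are purely illustrative:

# Hypothetical: grow replica 2 -> 3 by adding one new brick per existing replica set.
gluster volume add-brick storage1 replica 3 \
  seagls1:/exp/br03/brick3 seagls1:/exp/br04/brick4 seagls1:/exp/br05/brick5
# I assume it is then a full self-heal, rather than a rebalance, that populates the new bricks.
gluster volume heal storage1 full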

 

 

Server1 and Server2 are in the same location

Server3 and Future Server 4 are in the same location

 

As a reference, I am following: https://github.com/GlusterFS/Notes

Extend volume for replica count

Stop volume www: gluster volume stop www

Delete volume www: gluster volume delete www

Recreate the volume, but increase the replica count to 4 and define all four bricks:

gluster volume create www replica 4 transport tcp server1:/var/export/www server2:/var/export/www server3:/var/export/www server4:/var/export/www

Start volume www and set options:

gluster volume start www

gluster volume set www auth.allow 192.168.56.*

Check volume status: gluster volume info www

Volume data should become available again.
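If delete/re-create really is the only way, I assume the same recipe applied to my volume would look roughly like the following (replica 3, so each group of three consecutive bricks becomes one replica set spanning both sites; the brick paths are the ones already in my config, and I am guessing the old brick directories would need their gluster xattrs cleared before they can be re-used):

gluster volume stop storage1
gluster volume delete storage1
gluster volume create storage1 replica 3 transport tcp \
  evtgls1:/exp/br01/brick1 evtgls2:/exp/br01/brick1 seagls1:/exp/br01/brick1 \
  evtgls1:/exp/br02/brick2 evtgls2:/exp/br02/brick2 seagls1:/exp/br02/brick2
gluster volume start storage1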

 

My CURRENT CONFIG:

Volume Name: storage1

Type: Distributed-Replicate

Volume ID: 9616ce42-48bd-4fe3-883f-decd6c4fcd00

Status: Started

Number of Bricks: 3 x 2 = 6

Transport-type: tcp

Bricks:

Brick1: evtgls1:/exp/br01/brick1

Brick2: evtgls2:/exp/br01/brick1

Brick3: seagls1:/exp/br02/brick2

Brick4: evtgls2:/exp/br02/brick2

Brick5: seagls1:/exp/br01/brick1

Brick6: evtgls1:/exp/br02/brick2

Options Reconfigured:

diagnostics.brick-log-level: WARNING

diagnostics.client-log-level: WARNING

cluster.entry-self-heal: off

cluster.data-self-heal: off

cluster.metadata-self-heal: off

performance.cache-size: 1024MB

performance.cache-max-file-size: 2MB

performance.cache-refresh-timeout: 1

performance.stat-prefetch: off

performance.read-ahead: on

performance.quick-read: off

performance.write-behind-window-size: 4MB

performance.flush-behind: on

performance.write-behind: on

performance.io-thread-count: 32

performance.io-cache: on

network.ping-timeout: 2

nfs.addr-namelookup: off

performance.strict-write-ordering: on

 

 

 

Thomas Holkenbrink

Systems Architect

FiberCloud Inc.  |  www.fibercloud.com

 

 

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users

