Urgent :) Procedure for replacing Gluster Node on 3.8.12

Status: We have a 3 node gluster cluster (proxmox based)
- gluster 3.8.12
- Replica 3
- VM Hosting Only
- Sharded Storage

Or I should say we *had* a 3 node cluster - one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on 2 nodes with no data loss - gluster for the win!

But it's looking like I might have to replace the node with a new server, in which case I won't try anything fancy like trying to reuse the existing data on the failed node's disks - I'd rather let it resync the 3.2TB over the weekend.
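
While it resyncs I figure I can keep an eye on the heal queue with the usual heal commands - my assumption being that this is the right way to watch progress:

gluster volume heal datastore4 info

gluster volume heal datastore4 statistics heal-count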

In which case, what is the best way to replace the old failed node? The new node would have a new hostname and IP.

The failed node is vna. Let's call the new node vnd.

I'm thinking the following:

gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force

gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4


Would that be all that is required?
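
Spelt out in full, the sequence I'm assuming would look something like the below - the peer probe/detach steps and the explicit full heal at the end are my guesses (as is vnd keeping the same .proxmox.softlog naming), so please correct me if any of it is wrong or unnecessary. The detach gets "force" since vna is dead:

gluster peer probe vnd.proxmox.softlog

gluster volume remove-brick datastore4 replica 2 vna.proxmox.softlog:/tank/vmdata/datastore4 force

gluster peer detach vna.proxmox.softlog force

gluster volume add-brick datastore4 replica 3 vnd.proxmox.softlog:/tank/vmdata/datastore4

gluster volume heal datastore4 full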

Existing setup below:

gluster v info

Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
cluster.locking-scheme: granular
cluster.granular-entry-heal: yes
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
performance.readdir-ahead: on
performance.low-prio-threads: 32
user.cifs: off
performance.flush-behind: on



--
Lindsay