Follow the steps at:
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick
Read the steps in the section
"Replacing brick in Replicate/Distributed Replicate volumes".
We are working on making all the extra steps vanish, so that a single
command will take care of everything going forward. I will update
gluster-users once that happens.
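For reference, the flow that section describes looks roughly like the sketch below, using the gv0 volume from the quoted message as an example. The replacement brick path (/export/sdb1/gv0-new) is hypothetical, and the guide lists additional pre-steps (marking a source brick via a dummy directory and xattrs) that are omitted here, so treat this as an outline, not a substitute for the docs:

```shell
# Sketch only: assumes the replacement disk is already mounted and
# that /export/sdb1/gv0-new is a hypothetical new brick directory.
mkdir -p /export/sdb1/gv0-new

# Swap the dead brick for the new one:
gluster volume replace-brick gv0 \
    eapps-gluster03:/export/sdb1/gv0 \
    eapps-gluster03:/export/sdb1/gv0-new \
    commit force

# Trigger a full self-heal so the healthy replicas repopulate the new brick:
gluster volume heal gv0 full

# Watch heal progress:
gluster volume heal gv0 info
```
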
Pranith
On 10/09/2015 12:50 AM, Gene Liverman wrote:
So... this kinda applies to me too and I want to get some clarification. I have the following setup:
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: fc50d049-cebe-4a3f-82a6-748847226099
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: eapps-gluster01:/export/sdb1/gv0
Brick2: eapps-gluster02:/export/sdb1/gv0
Brick3: eapps-gluster03:/export/sdb1/gv0
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.drc: off
eapps-gluster03 had a hard drive failure, so I replaced the drive, formatted it, and now need gluster to be happy again. Gluster put a .glusterfs folder in /export/sdb1/gv0, but nothing else has shown up and the brick is offline. I read the docs on replacing a brick but seem to be missing something and would appreciate some help. Thanks!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users