I set up a dispersed volume with 1 x (3 + 1) nodes (I do know that 3+1 is not optimal).
Originally created in version 3.7 but recently upgraded without issue to 3.8.
# gluster vol info
Volume Name: rvol
Type: Disperse
Volume ID: e8f15248-d9de-458e-9896-
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: calliope:/brick/p1
Brick2: euterpe:/brick/p1
Brick3: lemans:/brick/p1
Brick4: thalia:/brick/p1
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: off
I inadvertently allowed one of the nodes (thalia) to be reinstalled, which overwrote the system but not the brick, and I need guidance on getting it back into the volume.
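As far as I can tell the brick data on thalia is intact; if it helps, I can verify the volume-id xattr on the brick root with something like the following (assuming the attr tools are installed):

(on thalia, as root)
# getfattr -n trusted.glusterfs.volume-id -e hex /brick/p1

and compare it against the Volume ID shown above.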
(on lemans)
# gluster peer status
Number of Peers: 3
Hostname: calliope
Uuid: 72373eb1-8047-405a-a094-
State: Peer in Cluster (Connected)
Hostname: euterpe
Uuid: 9fafa5c4-1541-4aa0-9ea2-
State: Peer in Cluster (Connected)
Hostname: thalia
Uuid: 843169fa-3937-42de-8fda-
State: Peer Rejected (Connected)
The thalia peer is rejected. If I try to peer probe thalia, I am told it is already part of the pool. If, from thalia, I try to peer probe one of the others, I am told that they are already part of another pool.
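The closest thing I have found is the generic "resolving Peer Rejected" procedure, which (if I read it correctly) amounts to wiping glusterd's state on the rejected node and re-probing, roughly as follows (untried here; assuming the default /var/lib/glusterd location, and adjust the service commands for your init system):

(on thalia, as root)
# systemctl stop glusterd
# find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
# systemctl start glusterd
# gluster peer probe lemans
# systemctl restart glusterd

I am not sure whether that is safe when the brick still holds the old data, or whether anything more is needed for a dispersed volume.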
I have tried removing the thalia brick with
gluster vol remove-brick rvol thalia:/brick/p1 start
but get the error
volume remove-brick start: failed: Remove brick incorrect brick count of 1 for disperse 4
I am not finding much guidance for this particular situation. I could use a suggestion on how to recover. It's a lab situation, so no biggie if I lose it.
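If wiping thalia's glusterd state is not the right approach, the only other idea I have is a replace-brick onto a fresh path on thalia followed by a full heal, something along the lines of the sketch below (the /brick/p2 path is just an example, and I assume thalia would have to be accepted back into the pool first):

(on one of the good nodes)
# gluster volume replace-brick rvol thalia:/brick/p1 thalia:/brick/p2 commit force
# gluster volume heal rvol full

but I have not tried it, so any pointers are welcome.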
Cheers
Tony Schreiner