We had some issues with a volume. The volume is a 3-replica volume across 3 gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes is operational. If we restart the gluster processes on one of the other nodes, the entire volume becomes unresponsive.
After a lot of trial and error, we have come to the conclusion that we do not want to try to rejoin the other 2 nodes in their current state. We would like to completely remove them from the config of the running node, fully reset the config on the nodes themselves, and then re-add them as if they were new nodes, letting them sync the volume entirely from the working node.
What would be the correct procedure for this? I assume I can use "gluster volume remove-brick" to force-remove the failed bricks from the volume and decrease the replica count, and then use "gluster peer detach" to force-remove the peers from the config, all from the currently still-working node. But what do I need to do to completely clear the config and data of the failed peers? The gluster processes are currently not running on those nodes, but config + data are still present. So basically, I need to clean them out before restarting them, so that they start in a clean state and do not try to connect to or interfere with the still-working node. A sketch of what we have in mind follows.
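For reference, this is roughly the sequence we are considering; the volume name VOLNAME, the peer names node2/node3, and the brick path /data/brick below are placeholders for our actual setup, so please correct anything that is wrong:

    # On the working node: drop the failed bricks and forget the peers
    gluster volume remove-brick VOLNAME replica 1 node2:/data/brick node3:/data/brick force
    gluster peer detach node2 force
    gluster peer detach node3 force

    # On each failed node, with glusterd still stopped: wipe the old
    # config (including the old UUID, so the node comes back as a brand
    # new peer) and the stale brick contents
    rm -rf /var/lib/glusterd/*
    rm -rf /data/brick
    mkdir -p /data/brick

    # Start glusterd on the cleaned node, then from the working node
    # re-add it, grow the replica count back, and trigger a full heal
    # (and afterwards the same for node3 with "replica 3")
    gluster peer probe node2
    gluster volume add-brick VOLNAME replica 2 node2:/data/brick
    gluster volume heal VOLNAME full

If wiping the brick directory itself is not an option, my understanding is that the .glusterfs directory and the trusted.glusterfs.volume-id xattr (setfattr -x trusted.glusterfs.volume-id /data/brick) would have to be removed instead so the path can be reused as a fresh brick, but I am not sure about that part.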
Thanks,
Tom
--
Kind regards,
Tom Cannaerts
Service and Maintenance
Intracto - digital agency
Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com