Re: Remove and re-add bricks/peers

We'll definitely look into upgrading this, but it's an older, legacy system, so we need to see what we can do without breaking it.

Returning to the re-adding question, what steps do I need to take to clear the config of the failed peers? Do I just wipe the data directory of the volume, or do I need to clear some other config files/folders as well?
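
To make the question concrete, here is roughly what I imagine the cleanup would look like, as a sketch only; /var/lib/glusterd is the default glusterd state directory, and /data/brick1 is just a placeholder for our actual brick path:

    # on each failed peer, with glusterd still stopped (as it is now):
    rm -rf /var/lib/glusterd/*        # peer UUIDs, volume definitions and other glusterd state
    rm -rf /data/brick1/*             # brick data
    rm -rf /data/brick1/.glusterfs    # gluster's internal metadata (not matched by * above)

Is that the full picture, or is there more state hiding elsewhere?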

Tom


On Mon, 17 Jul 2017 at 16:39, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
That's the way. However, I'd like to highlight that you're running a very old Gluster release. The current release is 3.11, which is an STM (short-term maintenance) release, and long-term support is on 3.10. You should consider upgrading to at least 3.10.

On Mon, Jul 17, 2017 at 3:25 PM, Tom Cannaerts - INTRACTO <tom.cannaerts@xxxxxxxxxxxx> wrote:
We had some issues with a volume. The volume is a 3-replica volume across 3 Gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes is operational. If we restart the gluster daemon on one of the other nodes, the entire volume becomes unresponsive.

After a lot of trial and error, we have come to the conclusion that we do not want to try to rejoin the other 2 nodes in their current form. We would like to completely remove them from the config of the running node, entirely reset the config on the nodes themselves, and then re-add them as if they were new nodes, having them completely sync the volume from the working node.

What would be the correct procedure for this? I assume I can use "gluster volume remove-brick" to force-remove the failed bricks from the volume and decrease the replica count, and then use "gluster peer detach" to force-remove the peers from the config, all on the currently still working node. But what do I need to do to completely clear the config and data of the failed peers? The gluster processes are currently not running on these nodes, but config and data are still present. So basically, I need to be able to clean them out before restarting them, so that they start in a clean state and do not try to connect to or interfere with the currently still-working node.
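
Spelled out, with a hypothetical volume name myvol, the failed peers as node2 and node3, and /data/brick1 standing in for the real brick path, I'm thinking of something like this (a sketch, not a tested procedure):

    # on the surviving node: drop the failed bricks and detach the peers
    gluster volume remove-brick myvol replica 1 node2:/data/brick1 node3:/data/brick1 force
    gluster peer detach node2 force
    gluster peer detach node3 force

    # on each failed node, while glusterd is stopped: wipe the old state
    rm -rf /var/lib/glusterd/*
    rm -rf /data/brick1 && mkdir -p /data/brick1   # recreating the dir also drops the old volume-id xattr

    # then start glusterd on the failed nodes and, from the surviving node, re-add them
    gluster peer probe node2
    gluster peer probe node3
    gluster volume add-brick myvol replica 3 node2:/data/brick1 node3:/data/brick1
    gluster volume heal myvol full

Does that match what you would do, or am I missing a step?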

Thanks,

Tom


--
Kind regards,
Tom Cannaerts

Service and Maintenance
Intracto - digital agency

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com

Are you satisfied with this e-mail?
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

