On 06/15/2016 11:06 AM, Gandalf Corvotempesta wrote:
> On 15 Jun 2016 07:09, "Atin Mukherjee" <amukherj@xxxxxxxxxx> wrote:
>> To get rid of this situation you'd need to stop all the running glusterd
>> instances, go into the /var/lib/glusterd/peers folder on all the nodes,
>> and manually correct the UUID file names and their contents if required.
>
> If I understood correctly, the only way to fix this is by bringing the
> whole cluster down? "you'd need to stop all the running glusterd instances"
>
> I hope you are referring to all instances on the failed node...

No, since the configuration is synced across all the nodes, any incorrect
data gets replicated throughout. So in this case, to be on the safe side and
to validate correctness, the glusterd instances on *all* the nodes should be
brought down. Having said that, this doesn't impact I/O, as the management
path is separate from the I/O path.
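
Roughly, the manual check on each node would look something like the sketch
below (the UUIDs and the hostname are only placeholders, and the service
commands assume a systemd-based distribution):

    # Stop the management daemon on every node first; brick processes and
    # client I/O keep running.
    systemctl stop glusterd

    # The node's own identity is recorded in glusterd.info.
    cat /var/lib/glusterd/glusterd.info
    #   UUID=<this-node-uuid>

    # One file per peer, named after that peer's UUID. The file name, the
    # uuid= line inside it, and the peer's own glusterd.info should all agree.
    ls /var/lib/glusterd/peers/
    cat /var/lib/glusterd/peers/<peer-uuid>
    #   uuid=<peer-uuid>
    #   state=3
    #   hostname1=server2.example.com

    # Once the file names and contents are consistent on all nodes,
    # start glusterd again and verify with 'gluster peer status'.
    systemctl start glusterd

The key invariant is that every file under /var/lib/glusterd/peers/ is named
after the UUID it describes, and that UUID matches the corresponding peer's
own glusterd.info.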