>Do I do the following:
>gluster peer detach urd-gds-021
>gluster peer probe urd-gds-021
>gluster volume replace-brick gds-home urd-gds-021:/brick urd-gds-021:/brick
>I just want to be sure before I enter any commands so I do not destroy instead of repairing.

I saw recently on the mailing list that the most appropriate way would be to reduce the replica count (remove-brick) and then increase it again (add-brick).

I guess something like:

- gluster volume remove-brick gds-home replica 1 <previously-failed-host>:/brick <arbiter-node>:/brick force
- gluster peer detach <previously-failed-host>
- gluster peer probe <newly-reinstalled-host>

On the arbiter:

umount /brick
mkfs.xfs -f -i size=512 /brick
mount /brick
# Avoid using a brick that is also a mount point:
mkdir /brick/brick

Then add the bricks back:

gluster volume add-brick gds-home replica 3 arbiter 1 <newly-reinstalled-host>:/brick/brick <arbiter-node>:/brick/brick

Then trigger a full heal:

gluster volume heal gds-home full

P.S.: The approach you have described is also valid, so stick with whatever you feel comfortable with. Don't forget to test your changes before pushing them to production.

Best Regards,
Strahil Nikolov
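
P.P.S.: A minimal sketch of how the result could be checked afterwards, assuming the same volume name (gds-home) used above; these are standard gluster CLI commands, so adjust to your release:

# Confirm the new brick layout and the replica/arbiter counts:
gluster volume info gds-home
# Watch the pending heal counts drop to zero:
gluster volume heal gds-home info summary
# Or list the individual entries still waiting to be healed:
gluster volume heal gds-home info

Note that "heal ... info summary" needs a reasonably recent Gluster release, while plain "heal ... info" is available on older versions as well.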