On 11/15/2017 12:54 PM, Daniel Berteaud wrote:
If it is only the brick that is faulty on the bad node, but everything else is fine (glusterd running, the node still a part of the trusted storage pool, etc.), you could just kill the brick process first and then do step 13 of "10.6.2. Replacing a Host Machine with the Same Hostname" (the mkdir of a non-existent directory, followed by the setfattr of a non-existent key) from https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf, then restart the brick by restarting glusterd on that node.

Read sections 10.5 and 10.6 of that document to get a better understanding of replacing bricks.

In fact, what would be the difference between reconnecting the brick with a wiped FS, and using |
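For reference, the procedure described above can be sketched roughly as the following commands. The volume name (myvol), brick path, mount point, and the `<brick-pid>` placeholder are illustrative assumptions, not values from this thread; check the Red Hat guide's exact steps before running anything on a live cluster.

```shell
# Rough sketch of the brick-reset procedure, under assumed names:
# volume "myvol", FUSE client mount at /mnt/myvol. Adjust for your setup.

# 1. Kill only the faulty brick process on the bad node.
#    Find its PID in the output of `gluster volume status`:
gluster volume status myvol        # note the PID of the bad brick
kill <brick-pid>                   # placeholder: the PID from above

# 2. From a client mount, create and remove a dummy directory and a
#    dummy extended attribute (the "mkdir of a non-existent dir,
#    setfattr of a non-existent key" step), so the good brick is
#    marked as the heal source:
mkdir /mnt/myvol/nonexistent-dir
rmdir /mnt/myvol/nonexistent-dir
setfattr -n user.nonexistent-key -v none /mnt/myvol
setfattr -x user.nonexistent-key /mnt/myvol

# 3. Restart glusterd on the bad node so it respawns the brick process:
systemctl restart glusterd

# 4. Optionally trigger and monitor self-heal:
gluster volume heal myvol
gluster volume heal myvol info
```

These commands require a live Gluster cluster, so they are shown as an ops sketch rather than a runnable script.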
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users