Hi,
I am running GlusterFS 4.1.6 on a test machine. I am trying to replace a faulty disk; below are the steps I followed, but they did not work.
Setup: 3 nodes, 2 disks per node, disperse volume 4+2 (a rough command sketch follows the steps):
Step 1 :- kill the PID of the faulty brick on its node
Step 2 :- run "gluster volume status"; it shows "N/A" under 'Pid' and 'TCP Port' for that brick
Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
Step 4 :- run "gluster v start volname force"
Step 5 :- run "gluster volume status" again; it still shows "N/A" under 'Pid' and 'TCP Port'
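For reference, here is roughly what I ran for the steps above. The volume name "testvol", device "/dev/sdb", and mount point "/gluster/brick2" are placeholders, not the actual names:

  # Step 1: look up the faulty brick's PID and kill it
  gluster volume status testvol
  kill <brick-pid>
  # Step 3: after swapping the disk, recreate the filesystem and remount at the old path
  mkfs.xfs -f /dev/sdb
  mount /dev/sdb /gluster/brick2
  # Step 4: force-start the volume so glusterd respawns the brick process
  gluster volume start testvol force
  # Step 5: check whether the brick came back online
  gluster volume status testvol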
The expected behavior was that a new brick process would start and the heal would begin.
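To check whether healing had started, one can run something like the following (same placeholder volume name):

  gluster volume heal testvol info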
Following the same steps on 3.10.1 works perfectly: a new brick process starts and the heal begins.
But the same steps do not work on 4.1.6. Did I miss a step? What should I do?
Amudhan