On 19/01/16 22:06, Krutika Dhananjay wrote:
As far as the reverse heal is concerned, there is a known issue with
add-brick when the replica count is increased; the fix is still under review.
Could you instead try the following steps at the time of add-brick and
tell me whether they work:
1. Run 'gluster volume add-brick datastore1 replica 3
vng.proxmox.softlog:/vmdata/datastore1' as usual.
2. Kill the glusterfsd process corresponding to the newly added brick
(the brick on vng in your case). You should be able to find its PID in
the output of 'gluster volume status datastore1'.
3. Create a dummy file at the root of the volume from the mount point.
The file can have any name.
4. Delete the dummy file created in step 3.
5. Bring the killed brick back up. For this, you can run 'gluster
volume start datastore1 force'.
6. Then execute 'gluster volume heal datastore1 full' on the node with
the highest UUID (we know how to find this node from the previous
thread on the same topic).
Then monitor heal-info output to track heal progress.
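The sequence above might be scripted roughly as follows. The 'gluster'
commands are left as comments since they need a live cluster; the sample
'volume status' output (hostnames, PIDs, column layout) is an assumption
for illustration only and may differ between Gluster versions:

```shell
#!/bin/sh
# Step 1 (run on a live cluster):
#   gluster volume add-brick datastore1 replica 3 \
#     vng.proxmox.softlog:/vmdata/datastore1

# Step 2: find the PID of the newly added brick from 'gluster volume status'.
# Hypothetical sample output, for illustration only:
status_output='Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vna.proxmox.softlog:/vmdata/datastore1 49152     0          Y       12001
Brick vnb.proxmox.softlog:/vmdata/datastore1 49152     0          Y       12002
Brick vng.proxmox.softlog:/vmdata/datastore1 49153     0          Y       12003'

# Pick the last field (the Pid column) of the line for the vng brick.
pid=$(printf '%s\n' "$status_output" | awk '/^Brick vng\./ {print $NF}')
echo "brick pid: $pid"
# kill "$pid"        # step 2: stop that brick process

# Steps 3-4: create, then delete, a dummy file at the root of the mounted
# volume (mount path is an assumption):
#   touch /mnt/datastore1/dummyfile && rm /mnt/datastore1/dummyfile

# Step 5: bring the killed brick back up:
#   gluster volume start datastore1 force

# Step 6: on the node with the highest UUID, trigger a full heal:
#   gluster volume heal datastore1 full
```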
I'm afraid it didn't work, Krutika; I still got the reverse-heal problem.
NB: I am starting from a replica 3 volume, removing a brick, cleaning it,
then re-adding it. Could that be affecting the process?
--
Lindsay Mathieson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users