Hi all,
I am currently running two replica-3 volumes acting as storage for VM images. Due to some issues with GlusterFS on top of the ext4 filesystem (kernel panics), I tried removing one brick of each volume from a single server, then re-adding the bricks after re-formatting the underlying partition to XFS -- on only one of the hosts, for testing purposes.
The commands used were:
1) gluster volume remove-brick gv1 replica 2 <server1>:/storage/gv1/brk force
2) gluster volume remove-brick gv2 replica 2 <server1>:/storage/gv2/brk force
3) reformatted /storage/gv1 and /storage/gv2 to XFS (these are the local/physical mount points of the Gluster bricks)
4) gluster volume add-brick gv1 replica 3 <server1>:/storage/gv1/brk
5) gluster volume add-brick gv2 replica 3 <server1>:/storage/gv2/brk
So far, so good -- both bricks were successfully re-added to their volumes.
6) gluster volume heal gv1 full
7) gluster volume heal gv2 full
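(For reference, this is how I have been watching the heal progress -- a sketch using the standard gluster CLI; "gv1" is my volume name:)

```shell
# List files still pending heal on each brick of gv1
gluster volume heal gv1 info

# List files that have already been healed
gluster volume heal gv1 info healed
```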
The heal operation started, and I can see files being replicated onto the newly added bricks. BUT: all the files on the two nodes which were not touched are now locked (read-only) -- I presume until the heal operation finishes replicating all the files to the newly added bricks (which might take a while...).
Now, as far as I understood the documentation of the healing process, the files should not have been locked at all. Or am I missing something fundamental here?
Is there a way to prevent locking of the source files during a "heal ... full" operation?
Is there a better way to perform the process I just described?
Your help is enormously appreciated.
Cheers,
Tomer Paretsky
_______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx http://www.gluster.org/mailman/listinfo/gluster-users