On Thu, 27 Jun 2019 at 12:17, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:

Hi,

On Tue, 25 Jun 2019 at 15:26, Dave Sherohman <dave@xxxxxxxxxxxxx> wrote:

I have a 9-brick, replica 2+A cluster and plan to (permanently) remove
one of the three subvolumes. I think I've worked out how to do it, but
want to verify first that I've got it right, since downtime or data loss
would be Bad Things.
The current configuration has six data bricks across six hosts (B
through G), and all three arbiter bricks on the same host (A), such as
one might create with:
# gluster volume create myvol replica 3 arbiter 1 B:/data C:/data A:/arb1 D:/data E:/data A:/arb2 F:/data G:/data A:/arb3
My objective is to remove nodes B and C entirely.
First up is to pull their bricks from the volume:
# gluster volume remove-brick myvol B:/data C:/data A:/arb1 start
(wait for data to be migrated)
# gluster volume remove-brick myvol B:/data C:/data A:/arb1 commit
There are some edge cases that may prevent a file from being migrated during a remove-brick. Please do the following after this:
- Check the remove-brick status for any failures (see the status command below). If there are any, check the rebalance log file for errors.
- Even if there are no failures, check the removed bricks to see if any files have not been migrated. If there are any, check that they are valid files on the brick, confirm that they match on both data bricks (i.e., that they are not in split-brain), and then copy them from the brick to the volume via the mount point (see the sketch after the find command below).
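For the first point, running the status subcommand against the same brick set shows per-node counts of scanned, rebalanced, and failed files, e.g.:
# gluster volume remove-brick myvol B:/data C:/data A:/arb1 status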
You can run the following at the root of the brick to find any files that have not been migrated (DHT marks its linkto pointer files with only the sticky bit set, mode 01000, so this lists only real data files and skips the pointers):
find . -not \( -path ./.glusterfs -prune \) -type f -not -perm 01000
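For the copy-back step, copy through a fuse mount of the volume rather than between bricks directly; a minimal sketch, assuming /data is the brick root and /mnt/myvol is a scratch mount point (both paths are made up here):
# mount -t glusterfs localhost:/myvol /mnt/myvol
# cp -a /data/some/leftover/file /mnt/myvol/some/leftover/file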
The rest of the steps look good.

Regards,
Nithya

And then remove the nodes with:
# gluster peer detach B
# gluster peer detach C
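(I assume I can then verify that both peers are really gone with:)
# gluster peer status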
Is this correct, or did I forget any steps and/or mangle the syntax on
any commands?
Also, for the remove-brick command, is there any way to throttle the
amount of bandwidth which will be used for the data migration?
Unfortunately, I was not able to provision a dedicated VLAN for the
gluster servers to communicate among themselves, so I don't want it
hogging all available capacity if that can be avoided.
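(The closest thing I've found is the cluster.rebal-throttle volume option, which reportedly accepts lazy/normal/aggressive; I'm not certain it applies to remove-brick migration on 3.12, so treat this as a guess:)
# gluster volume set myvol cluster.rebal-throttle lazy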
If it makes a difference, my gluster version is 3.12.15-1, running on
Debian and installed from the debs at
deb https://download.gluster.org/pub/gluster/glusterfs/3.12/LATEST/Debian/9/amd64/apt stretch main
--
Dave Sherohman
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users