Hi,
The status shows quite a few failures. Please check the rebalance logs to see why that happened. We can decide what to do based on the errors.
Once you run a commit, the brick will no longer be part of the volume and you will not be able to access those files via the client.
Do you have sufficient space on the remaining bricks for the files on the removed brick?
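For reference, a rough sketch of the commands involved in checking the failures (the rebalance log path shown is the usual default under /var/log/glusterfs; adjust it if your glusterd logs elsewhere):

```shell
# Check the per-node status of the remove-brick rebalance
gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 status

# On the node being removed, scan its rebalance log for error lines
# (log file name is volume-specific; path assumed to be the default)
grep ' E ' /var/log/glusterfs/atlasglust-rebalance.log | tail -n 50
```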
Regards,
Nithya
On Mon, 4 Feb 2019 at 03:50, mohammad kashif <kashif.alig@xxxxxxxxx> wrote:
Hi

I have a pure distributed gluster volume with nine nodes. While trying to remove one node, I ran:

gluster volume remove-brick atlasglust nodename:/glusteratlas/brick007/gv0 start

It completed, but with around 17000 failures:

    Node        Rebalanced-files    size      scanned    failures   skipped    status       run time in h:m:s
    ---------   ----------------    -------   --------   --------   --------   ----------   -----------------
    nodename    4185858             27.5TB    6746030    17488      0          completed    405:15:34

I can see that there is still 1.5 TB of data on the node which I was trying to remove. I am not sure what to do now. Should I run the remove-brick command again so that the files which failed can be tried again? Or should I run commit first and then try to remove the node again?

Please advise, as I don't want to lose files.

Thanks

Kashif
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users