To add to this, it appears that the replace-brick operation is in a broken state. I can't abort it or commit it, and I can't run any other commands until it thinks the replace-brick is complete.
Is there a way to manually remove the task, since it failed?
root@pixel-glusterfs1:/# gluster volume status gdata2tb
Status of volume: gdata2tb
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.31:/mnt/data2tb/gbrick3                    49157   Y       14783
Brick 10.0.1.152:/mnt/raid10/gbrick3                    49158   Y       2622
Brick 10.0.1.153:/mnt/raid10/gbrick3                    49153   Y       3034
NFS Server on localhost                                 2049    Y       14790
Self-heal Daemon on localhost                           N/A     Y       14794
NFS Server on 10.0.0.205                                N/A     N       N/A
Self-heal Daemon on 10.0.0.205                          N/A     Y       10323
NFS Server on 10.0.1.153                                2049    Y       12735
Self-heal Daemon on 10.0.1.153                          N/A     Y       12742
NFS Server on 10.0.1.152                                2049    Y       2629
Self-heal Daemon on 10.0.1.152                          N/A     Y       2636

Task                                 ID                                      Status
----                                 --                                      ------
Replace brick                        1dace9f0-ba98-4db9-9124-c962e74cce07    completed
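For reference, the abort/commit attempts take roughly this form (NEW_BRICK is just a placeholder here for the destination brick I gave to replace-brick):

    gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 NEW_BRICK abort
    gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 NEW_BRICK commit force

Both come back with errors rather than clearing the task.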
---------- Forwarded message ----------
From: Joseph Jozwik <jjozwik@xxxxxxxxxxxxxx>
Date: Tue, Aug 26, 2014 at 3:42 PM
Subject: Moving brick of replica volume to new mount on filesystem.
To: gluster-users@xxxxxxxxxxx
Hello,
I need to move a brick to another location on the filesystem.
My initial plan was to:
1. service glusterfs-server stop
2. rsync -ap the brick3 folder to the new volume on the server
3. umount the old volume and bind-mount the new one to the same location (a rough sketch of these commands is below).
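Roughly, what I have in mind is something like this (the destination /mnt/newraid is only an example path, not an existing mount on the server):

    service glusterfs-server stop
    # copy the brick contents to the new filesystem, preserving ownership/permissions
    # (gluster bricks rely on extended attributes, so -X may be needed as well)
    rsync -ap /mnt/data2tb/gbrick3/ /mnt/newraid/gbrick3/
    umount /mnt/data2tb
    # recreate the old brick path and bind-mount the new copy onto it
    mkdir -p /mnt/data2tb/gbrick3
    mount --bind /mnt/newraid/gbrick3 /mnt/data2tb/gbrick3
    service glusterfs-server start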
However, after I stopped glusterfs-server on the node there were still glusterd background processes running, and I was not sure how to safely stop them.
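What I was considering for checking and stopping the leftover processes is sketched below, but I'm not sure whether this is the recommended way, hence the question:

    # list any gluster-related processes still running (glusterd, glusterfsd bricks, glusterfs NFS/self-heal)
    ps ax | grep -i gluster | grep -v grep
    # forcefully stop them by name
    pkill glusterfsd
    pkill glusterfs
    pkill glusterd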
I also attempted a replace-brick to a new location on the same server, but that did not work; it failed with "volume replace-brick: failed: Commit failed on localhost. Please check the log file for more details."
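The command form was roughly the following, with NEW_PATH standing in for the new directory on the same server:

    gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/NEW_PATH/gbrick3 start
    gluster volume replace-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 10.0.1.31:/NEW_PATH/gbrick3 commit

and the commit step is where the error above appears.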
Then I attempted a remove-brick with
"volume remove-brick gdata2tb replica 2 10.0.1.31:/mnt/data2tb/gbrick3 start"
gluster> volume remove-brick gdata2tb 10.0.1.31:/mnt/data2tb/gbrick3 status
volume remove-brick: failed: Volume gdata2tb is not a distribute volume or contains only 1 brick.
Not performing rebalance
gluster>
Volume Name: gdata2tb
Type: Replicate
Volume ID: 6cbcb2fc-9fd7-467e-9561-bff1937e8492
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.0.1.31:/mnt/data2tb/gbrick3
Brick2: 10.0.1.152:/mnt/raid10/gbrick3
Brick3: 10.0.1.153:/mnt/raid10/gbrick3
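If I'm reading the error right, since gdata2tb is a pure 1 x 3 replicate volume there is no data to rebalance, so the start/status form of remove-brick does not apply; dropping to replica 2 would presumably be something like

    gluster volume remove-brick gdata2tb replica 2 10.0.1.31:/mnt/data2tb/gbrick3 force

though what I actually want is to move the brick, not reduce the replica count.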
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users