Has there been an answer to this? I just found out that on our production server one of my 10 raid6 bricks was incorrectly initialized as raid5, so I am in the same boat as Andrew: I need to relocate the data off that brick, remove the brick from gluster, re-raid it, format it with xfs, add it back to gluster, and rebalance.

On my lab system I tried "gluster remove-brick <volume> <raid5-brick> start", waited for the operation "status" to report "completed", and then ran "commit" to remove the brick configuration from gluster. To my surprise, about half of my data files (a rough guess) were still on the <raid5-brick> and NOT in the glusterfs volume, so apparently the remove-brick operation migrated only a portion of the files that resided on the brick being removed. For reference, my lab system consists of only 3 bricks, each about 10% full at the start of the test. Any help/pointers would be appreciated. (The full remove/re-add sequence I have in mind is sketched at the bottom of this mail, below the quoted message.)

Bob

-----Original Message-----
From: gluster-users-bounces@xxxxxxxxxxx [mailto:gluster-users-bounces@xxxxxxxxxxx] On Behalf Of Andrew Smith
Sent: Saturday, February 22, 2014 9:53 AM
To: gluster-users@xxxxxxxxxxx
Subject: Shuffling data to modify brick FS

I have a system with 4 bricks, each on an independent server. I have found, unhappily, that I didn't configure my bricks with enough metadata space, and I can only increase the metadata size by rebuilding the filesystem. So I wish to:

1) Move all the data off of a brick
2) Rebuild the FS on that brick
3) Add it back
4) Repeat for the other 3 bricks

My problem is that the only option to move data off of a brick seems to involve moving the data to a single target drive. Since my drives are about 60% full, none of the other drives can accommodate the entire data set of the removed drive. So what I want to do is something like:

1) Rebalance my 4-brick system onto 3 bricks
2) Rebuild the FS on the retired brick
3) Add back the refreshed brick and migrate
4) Repeat for the other bricks

I can't figure out how to do this from the docs, which seem to cover only the case where a brick is replaced.

Thanks
Andy

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
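
P.S. For reference, here is a sketch of the sequence I have in mind, with <volume> and <raid5-brick> as placeholder names. The remove-brick start/status/commit part is what I actually ran in the lab; the add-brick/rebalance part is what I still need to do, so treat this as untested end to end rather than a verified recipe:

    # Drain the brick and remove it from the volume
    gluster volume remove-brick <volume> <raid5-brick> start
    gluster volume remove-brick <volume> <raid5-brick> status   # repeat until it reports "completed"
    gluster volume remove-brick <volume> <raid5-brick> commit
    # (in my lab test, some files were still left on the brick even after "completed" and commit)

    # After re-raiding the disks and re-creating the xfs filesystem,
    # add the brick back and rebalance the volume
    gluster volume add-brick <volume> <raid5-brick>
    gluster volume rebalance <volume> start
    gluster volume rebalance <volume> status   # repeat until the rebalance completes

As far as I can tell, this same drain/rebuild/re-add cycle, repeated once per brick, is what Andrew is after as well.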