On 03/31/2015 05:12 AM, Lilley, John F. wrote:
If I understand you correctly, you want to replace a brick in a distribute volume with one of lesser capacity. You could first add a new brick and then remove the existing brick with the remove-brick start/status/commit sequence. Something like this:

------------------------------------------------
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3

[root@tuxpad ~]# gluster volume add-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6}
volume add-brick: success

[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3
Brick4: 127.0.0.2:/home/ravi/bricks/brick4
Brick5: 127.0.0.2:/home/ravi/bricks/brick5
Brick6: 127.0.0.2:/home/ravi/bricks/brick6

[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} start
volume remove-brick start: success
ID: d535675e-8362-4a44-a291-1e567a77531e

[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} status
     Node   Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
---------        -----------  -------  --------  ---------  --------  ----------  -----------------
localhost                 10   0Bytes        20          0         0   completed               0.00

[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated. If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick4
Brick2: 127.0.0.2:/home/ravi/bricks/brick5
Brick3: 127.0.0.2:/home/ravi/bricks/brick6

[root@tuxpad ~]#
------------------------------------------------

Hope this helps.
Ravi
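As the commit output above notes, it is worth scanning the removed brick's path for leftover data files before re-purposing it. One way to sketch that check (assuming the example brick path from the session above; substitute your own) is to look for non-empty regular files while skipping GlusterFS's internal .glusterfs metadata directory:

```shell
# Path of the brick that was just removed (example path from the session
# above; substitute your own).
BRICK=/home/ravi/bricks/brick1

# Print any regular, non-empty files left behind, pruning the internal
# .glusterfs metadata tree. No output means nothing was left unmigrated.
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -type f -size +0c -print 2>/dev/null
```

If this prints anything, copy those files back in through a gluster mount point, as the commit message advises, rather than directly off the brick.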
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users