migrating data with "remove-brick"

Hi,

Some time ago there was a post about migrating data using "remove-brick":

http://www.gluster.org/pipermail/gluster-users/2012-October/034473.html

Is that approach reliable? Why is it still not officially documented?

I followed the instructions and ran the command on the bricks of a distributed volume, using Gluster version 3.4.2. It works for a small number of files. However, when I tried it with a large storage space and a large number of files, it looks like some files are missing after the migration.

Here is the process:

(1) Check volume info.

# gluster volume info

Volume Name: content
Type: Distribute
Volume ID: 48485504-e89d-4146-afd3-356397e5d541
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: adsfsvip:/data/c01
Brick2: adsfsvip:/data/c03
Brick3: adsfsvip:/data/c02
Options Reconfigured:
nfs.disable: OFF
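
Before starting a migration, it may also be worth confirming that all bricks are online (this sanity check is my addition, not part of the original procedure):

# gluster volume status content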

(2) Check the number of files in the volume, so that the content migration and rebalance can be verified afterwards.

# for i in 1 2 3; do tree /data/c0$i | grep files; done
661 directories, 27126 files
661 directories, 15619 files
661 directories, 28531 files

# tree /data | grep files
1995 directories, 71276 files
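
One caveat worth noting (my addition): tree skips hidden entries, so the .glusterfs housekeeping directory is not counted, but the per-brick totals can still include DHT link files (zero-length, mode-1000 "pointer" files that reference data living on another brick). A count that excludes them may be more meaningful; a minimal sketch, assuming GNU find on the brick host:

# count regular data files per brick, skipping .glusterfs and DHT link files
for i in 1 2 3; do
    find /data/c0$i -path '*/.glusterfs' -prune -o -type f ! -perm -1000 -print | wc -l
done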

(3) Run the "remove-brick" command with "start" to begin migrating content. Suppose we want to migrate the content on Brick3 to the other bricks.

# gluster volume remove-brick content adsfsvip:/data/c02 start
volume remove-brick start: success
ID: 06891565-b584-4ab5-a015-3bda991f2454

(4) The data migration may take some time; run the following command to check the migration progress periodically.

[root@ads1 glusterfs]# while [ 1 ] ; do gluster volume remove-brick content adsfsvip:/data/c02 status; echo; sleep 2 ; done
     Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
localhost             4821        19.0GB         13427             0    in progress           286.00

     Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
localhost             4849        19.1GB         13455             0    in progress           288.00

     Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
localhost             4877        19.3GB         13656             0    in progress           290.00

...

     Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
localhost            24864       148.3GB         61459             0    in progress          1525.00

     Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
localhost            24871       148.3GB         61465             0      completed          1526.00
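
For reference (this step is not shown in the original post above): once the status reports "completed", the removal is normally finalized with "commit", which actually detaches the brick from the volume:

# gluster volume remove-brick content adsfsvip:/data/c02 commit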

(5) Check the number of files on each brick in the volume. The number of files on the brick we just ran "remove-brick" against should be 0.

[root@ads1 glusterfs]# for i in 1 2 3; do tree /data/c0$i | grep files; done
661 directories, 29533 files
661 directories, 0 files
661 directories, 35666 files
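
Since tree ignores hidden entries, a stricter check that the drained brick really holds no data files (my addition, same find assumptions as above):

# should print 0 if only housekeeping data is left on the removed brick
find /data/c02 -path '*/.glusterfs' -prune -o -type f -print | wc -l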

(6) However, the total number of files has dropped from 71276 to 65199. Did we lose some files in the migration process?

[root@ads1 glusterfs]# tree /data | grep files
1995 directories, 65199 files
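
Brick-side totals are not necessarily the authoritative view, so before concluding that data was lost it may be worth comparing file lists taken through a client mount, where DHT link files are invisible. A minimal sketch (my addition; /mnt/content is a hypothetical mount point, and it assumes a listing was saved before the migration):

# on a client, before the migration:
find /mnt/content -type f | sort > /tmp/files-before.txt
# ... run the migration ...
# on the same client, after the migration:
find /mnt/content -type f | sort > /tmp/files-after.txt
# print paths present before but missing after
comm -23 /tmp/files-before.txt /tmp/files-after.txt

The rebalance log on the node that ran the migration (typically /var/log/glusterfs/content-rebalance.log) may also record failures or skipped files.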

Thank you and best regards,

Leo

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
