Good morning Ashish,

your explanations are always very useful, thank you very much: I will keep these suggestions in mind for any future needs.

During the weekend the remove-brick procedures completed successfully, and we were able to free up all the bricks defined on servers s04 and s05, plus 6 of the 12 bricks on server s06. So, thanks to your suggestions, we are about to complete this first phase (removal of all bricks defined on the s04, s05 and s06 servers). I really appreciate your support.

Now I have one last question (I hope): after "remove-brick commit" I noticed that some data remains on each brick (about 1.2GB). Please take a look at "df-h_on_s04_s05_s06.txt". The situation is almost the same on all 3 servers mentioned above: a long list of directory names and some files are still on the brick, but their size is 0.

Examples:

a lot of empty directories under /gluster/mnt*/brick/.glusterfs:

8  /gluster/mnt2/brick/.glusterfs/b7/1b
0  /gluster/mnt2/brick/.glusterfs/b7/ee/b7ee94a5-a77c-4c02-85a5-085992840c83
0  /gluster/mnt2/brick/.glusterfs/b7/ee/b7ee85d4-ce48-43a7-a89a-69c728ee8273

some empty files in directories under /gluster/mnt*/brick/*:

[root@s04 ~]# cd /gluster/mnt1/brick/
[root@s04 brick]# ls -l
total 32
drwxr-xr-x 7 root root 100 Sep 11 22:14 archive_calypso
[root@s04 brick]# cd archive_calypso/
[root@s04 archive_calypso]# ll
total 0
drwxr-x--- 3 root 5200 29 Sep 11 22:13 ans002
drwxr-x--- 3 5104 5100 32 Sep 11 22:14 ans004
drwxr-x--- 3 4506 4500 31 Sep 11 22:14 ans006
drwxr-x--- 3 4515 4500 28 Sep 11 22:14 ans015
drwxr-x--- 4 4321 4300 54 Sep 11 22:14 ans021
[root@s04 archive_calypso]# du -a *
0  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.0/echam5/echam_sf006_198110.01.gz
0  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.0/echam5
0  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.0
0  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.1/echam5/echam_sf006_198105.01.gz
0  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.1/echam5/echam_sf006_198109.01.gz
8  ans002/archive/ans002/HINDCASTS/RUN_ATMWANG_LANSENS/19810501.1/echam5

What should we do with this data? Should I back up these "empty" directories and files to different storage before deleting them?

As soon as all the bricks are empty, I plan to re-add the new bricks using the following commands:

gluster peer detach s04
gluster peer detach s05
gluster peer detach s06

gluster peer probe s04
gluster peer probe s05
gluster peer probe s06

gluster volume add-brick tier2 \
  s04-stg:/gluster/mnt1/brick s05-stg:/gluster/mnt1/brick s06-stg:/gluster/mnt1/brick \
  s04-stg:/gluster/mnt2/brick s05-stg:/gluster/mnt2/brick s06-stg:/gluster/mnt2/brick \
  s04-stg:/gluster/mnt3/brick s05-stg:/gluster/mnt3/brick s06-stg:/gluster/mnt3/brick \
  s04-stg:/gluster/mnt4/brick s05-stg:/gluster/mnt4/brick s06-stg:/gluster/mnt4/brick \
  s04-stg:/gluster/mnt5/brick s05-stg:/gluster/mnt5/brick s06-stg:/gluster/mnt5/brick \
  s04-stg:/gluster/mnt6/brick s05-stg:/gluster/mnt6/brick s06-stg:/gluster/mnt6/brick \
  s04-stg:/gluster/mnt7/brick s05-stg:/gluster/mnt7/brick s06-stg:/gluster/mnt7/brick \
  s04-stg:/gluster/mnt8/brick s05-stg:/gluster/mnt8/brick s06-stg:/gluster/mnt8/brick \
  s04-stg:/gluster/mnt9/brick s05-stg:/gluster/mnt9/brick s06-stg:/gluster/mnt9/brick \
  s04-stg:/gluster/mnt10/brick s05-stg:/gluster/mnt10/brick s06-stg:/gluster/mnt10/brick \
  s04-stg:/gluster/mnt11/brick s05-stg:/gluster/mnt11/brick s06-stg:/gluster/mnt11/brick \
  s04-stg:/gluster/mnt12/brick s05-stg:/gluster/mnt12/brick s06-stg:/gluster/mnt12/brick \
  force

gluster volume rebalance tier2 fix-layout start
gluster volume rebalance tier2 start

From your point of view, are these the right commands to complete this repair task?

Thank you very much for your help.

Regards,
Mauro
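For reference, this is roughly the check-and-clean step I intend to script before re-adding each brick. It is only a sketch, simulated here against a temporary directory instead of a real brick (paths and file names are illustrative): it verifies that nothing with actual content is left outside .glusterfs, then empties the directory.

```shell
#!/bin/sh
# Sketch (hypothetical paths): simulate the state left after
# "remove-brick commit" and the cleanup before the path is reused.
set -e
TMP=$(mktemp -d)
BRICK="$TMP/brick"                       # stand-in for /gluster/mnt1/brick

# Recreate the observed leftovers: zero-byte gfid entries and data files.
mkdir -p "$BRICK/.glusterfs/b7/ee" "$BRICK/archive_calypso/ans002"
touch "$BRICK/.glusterfs/b7/ee/b7ee94a5" \
      "$BRICK/archive_calypso/ans002/echam_sf006.gz"

# 1) Safety check: list any file outside .glusterfs that still has
#    real content (size > 0). The listing must be empty before wiping.
leftovers=$(find "$BRICK" -path "$BRICK/.glusterfs" -prune \
                 -o -type f -size +0c -print)
[ -z "$leftovers" ] && echo "only zero-byte remnants: safe to clean"

# 2) Wipe the remnants so add-brick starts from an empty directory.
rm -rf "$BRICK/.glusterfs"
find "$BRICK" -mindepth 1 -delete

# 3) On a real brick (not this temp dir) I would also clear the
#    markers that make gluster refuse a previously used path, e.g.:
#      setfattr -x trusted.glusterfs.volume-id /gluster/mnt1/brick
#      setfattr -x trusted.gfid /gluster/mnt1/brick
ls -A "$BRICK"                           # prints nothing: brick is empty
```

Does this match what you would recommend, or is reformatting the filesystem on each freed brick the safer route?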
[root@s04 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/cl_s04-root              100G  2,3G   98G   3% /
devtmpfs                              32G     0   32G   0% /dev
tmpfs                                 32G  4,0K   32G   1% /dev/shm
tmpfs                                 32G   90M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_s04-gluster           100G   33M  100G   1% /gluster
/dev/mapper/cl_s04-var               100G  2,2G   98G   3% /var
/dev/sda1                           1014M  152M  863M  15% /boot
/dev/mapper/gluster_vgf-gluster_lvf  9,0T  1,2G  9,0T   1% /gluster/mnt5
/dev/mapper/gluster_vgb-gluster_lvb  9,0T  1,2G  9,0T   1% /gluster/mnt1
/dev/mapper/gluster_vgj-gluster_lvj  9,0T  1,2G  9,0T   1% /gluster/mnt9
/dev/mapper/gluster_vgi-gluster_lvi  9,0T  1,2G  9,0T   1% /gluster/mnt8
/dev/mapper/gluster_vgd-gluster_lvd  9,0T  1,2G  9,0T   1% /gluster/mnt3
/dev/mapper/gluster_vgm-gluster_lvm  9,0T  1,2G  9,0T   1% /gluster/mnt12
/dev/mapper/gluster_vgg-gluster_lvg  9,0T  1,2G  9,0T   1% /gluster/mnt6
/dev/mapper/gluster_vgh-gluster_lvh  9,0T  1,2G  9,0T   1% /gluster/mnt7
/dev/mapper/gluster_vgl-gluster_lvl  9,0T  1,2G  9,0T   1% /gluster/mnt11
/dev/mapper/gluster_vge-gluster_lve  9,0T  1,2G  9,0T   1% /gluster/mnt4
/dev/mapper/gluster_vgc-gluster_lvc  9,0T  1,2G  9,0T   1% /gluster/mnt2
/dev/mapper/gluster_vgk-gluster_lvk  9,0T  1,2G  9,0T   1% /gluster/mnt10
tmpfs                                6,3G     0  6,3G   0% /run/user/0

[root@s05 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/cl_s05-root              100G  2,1G   98G   3% /
devtmpfs                              32G     0   32G   0% /dev
tmpfs                                 32G  4,0K   32G   1% /dev/shm
tmpfs                                 32G   90M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_s05-gluster           100G   33M  100G   1% /gluster
/dev/mapper/cl_s05-var               100G  2,3G   98G   3% /var
/dev/sda1                           1014M  152M  863M  15% /boot
/dev/mapper/gluster_vgl-gluster_lvl  9,0T  1,2G  9,0T   1% /gluster/mnt11
/dev/mapper/gluster_vgd-gluster_lvd  9,0T  5,4G  9,0T   1% /gluster/mnt3
/dev/mapper/gluster_vge-gluster_lve  9,0T  5,4G  9,0T   1% /gluster/mnt4
/dev/mapper/gluster_vgj-gluster_lvj  9,0T  1,2G  9,0T   1% /gluster/mnt9
/dev/mapper/gluster_vgc-gluster_lvc  9,0T  5,4G  9,0T   1% /gluster/mnt2
/dev/mapper/gluster_vgf-gluster_lvf  9,0T  5,4G  9,0T   1% /gluster/mnt5
/dev/mapper/gluster_vgm-gluster_lvm  9,0T  1,2G  9,0T   1% /gluster/mnt12
/dev/mapper/gluster_vgk-gluster_lvk  9,0T  1,2G  9,0T   1% /gluster/mnt10
/dev/mapper/gluster_vgh-gluster_lvh  9,0T  1,2G  9,0T   1% /gluster/mnt7
/dev/mapper/gluster_vgi-gluster_lvi  9,0T  1,2G  9,0T   1% /gluster/mnt8
/dev/mapper/gluster_vgb-gluster_lvb  9,0T  5,4G  9,0T   1% /gluster/mnt1
/dev/mapper/gluster_vgg-gluster_lvg  9,0T  5,4G  9,0T   1% /gluster/mnt6
tmpfs                                6,3G     0  6,3G   0% /run/user/0

[root@s06 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/cl_s06-root              100G  2,1G   98G   3% /
devtmpfs                              32G     0   32G   0% /dev
tmpfs                                 32G  4,0K   32G   1% /dev/shm
tmpfs                                 32G   82M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_s06-var               100G  2,3G   98G   3% /var
/dev/mapper/cl_s06-gluster           100G   33M  100G   1% /gluster
/dev/sda1                           1014M  152M  863M  15% /boot
/dev/mapper/gluster_vgd-gluster_lvd  9,0T  1,2G  7,5T  18% /gluster/mnt3
/dev/mapper/gluster_vgg-gluster_lvg  9,0T  1,2G  7,5T  18% /gluster/mnt6
/dev/mapper/gluster_vgc-gluster_lvc  9,0T  1,2G  7,5T  18% /gluster/mnt2
/dev/mapper/gluster_vge-gluster_lve  9,0T  1,2G  7,5T  18% /gluster/mnt4
/dev/mapper/gluster_vgj-gluster_lvj  9,0T  3,0T  6,1T  33% /gluster/mnt9
/dev/mapper/gluster_vgb-gluster_lvb  9,0T  1,2G  7,5T  18% /gluster/mnt1
/dev/mapper/gluster_vgh-gluster_lvh  9,0T  3,0T  6,1T  33% /gluster/mnt7
/dev/mapper/gluster_vgf-gluster_lvf  9,0T  1,2G  7,5T  18% /gluster/mnt5
/dev/mapper/gluster_vgi-gluster_lvi  9,0T  3,0T  6,1T  33% /gluster/mnt8
/dev/mapper/gluster_vgl-gluster_lvl  9,0T  3,0T  6,1T  33% /gluster/mnt11
/dev/mapper/gluster_vgk-gluster_lvk  9,0T  3,0T  6,1T  33% /gluster/mnt10
/dev/mapper/gluster_vgm-gluster_lvm  9,0T  3,0T  6,1T  33% /gluster/mnt12
tmpfs                                6,3G     0  6,3G   0% /run/user/0
[root@s06 ~]# gluster vol info

Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 8 x (4 + 2) = 48
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Brick43: s06-stg:/gluster/mnt7/brick
Brick44: s06-stg:/gluster/mnt8/brick
Brick45: s06-stg:/gluster/mnt9/brick
Brick46: s06-stg:/gluster/mnt10/brick
Brick47: s06-stg:/gluster/mnt11/brick
Brick48: s06-stg:/gluster/mnt12/brick
Options Reconfigured:
network.ping-timeout: 0
features.scrub: Active
features.bitrot: on
features.inode-quota: on
features.quota: on
performance.client-io-threads: on
cluster.min-free-disk: 10
cluster.quorum-type: auto
transport.address-family: inet
nfs.disable: on
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
performance.readdir-ahead: on
performance.parallel-readdir: off
cluster.readdir-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 50000
performance.io-cache: off
disperse.cpu-extensions: auto
performance.io-thread-count: 16
features.quota-deem-statfs: on
features.default-soft-limit: 90
cluster.server-quorum-type: server
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.brick-multiplex: on
cluster.server-quorum-ratio: 51%
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users