Hi Ashish, attached you can find the rebalance log file and the most recently updated brick log file (the other files in the /var/log/glusterfs/bricks directory seem to be too old). I have just stopped the running rebalance (as you can see at the bottom of the rebalance log file). So, if a safe procedure exists to correct the problem, I would like to execute it. I don't know if I may ask this of you, but, if possible, could you please describe to me, step by step, the right procedure to remove the newly added bricks without losing the data that have already been rebalanced?

The following outputs show the result of the "df -h" command executed on one of the first 3 nodes (s01, s02, s03), which already existed, and on one of the last 3 nodes (s04, s05, s06), which were added recently.

[root@s06 bricks]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/cl_s06-root              100G  2,1G   98G   3% /
devtmpfs                              32G     0   32G   0% /dev
tmpfs                                 32G  4,0K   32G   1% /dev/shm
tmpfs                                 32G   26M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_s06-var               100G  2,0G   99G   2% /var
/dev/mapper/cl_s06-gluster           100G   33M  100G   1% /gluster
/dev/sda1                           1014M  152M  863M  15% /boot
/dev/mapper/gluster_vgd-gluster_lvd  9,0T  807G  8,3T   9% /gluster/mnt3
/dev/mapper/gluster_vgg-gluster_lvg  9,0T  807G  8,3T   9% /gluster/mnt6
/dev/mapper/gluster_vgc-gluster_lvc  9,0T  807G  8,3T   9% /gluster/mnt2
/dev/mapper/gluster_vge-gluster_lve  9,0T  807G  8,3T   9% /gluster/mnt4
/dev/mapper/gluster_vgj-gluster_lvj  9,0T  887G  8,2T  10% /gluster/mnt9
/dev/mapper/gluster_vgb-gluster_lvb  9,0T  807G  8,3T   9% /gluster/mnt1
/dev/mapper/gluster_vgh-gluster_lvh  9,0T  887G  8,2T  10% /gluster/mnt7
/dev/mapper/gluster_vgf-gluster_lvf  9,0T  807G  8,3T   9% /gluster/mnt5
/dev/mapper/gluster_vgi-gluster_lvi  9,0T  887G  8,2T  10% /gluster/mnt8
/dev/mapper/gluster_vgl-gluster_lvl  9,0T  887G  8,2T  10% /gluster/mnt11
/dev/mapper/gluster_vgk-gluster_lvk  9,0T  887G  8,2T  10% /gluster/mnt10
/dev/mapper/gluster_vgm-gluster_lvm  9,0T  887G  8,2T  10% /gluster/mnt12
tmpfs                                6,3G     0  6,3G   0% /run/user/0

[root@s01 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/cl_s01-root              100G  5,3G   95G   6% /
devtmpfs                              32G     0   32G   0% /dev
tmpfs                                 32G   39M   32G   1% /dev/shm
tmpfs                                 32G   26M   32G   1% /run
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/cl_s01-var               100G   11G   90G  11% /var
/dev/md127                          1015M  151M  865M  15% /boot
/dev/mapper/cl_s01-gluster           100G   33M  100G   1% /gluster
/dev/mapper/gluster_vgi-gluster_lvi  9,0T  5,5T  3,6T  61% /gluster/mnt7
/dev/mapper/gluster_vgm-gluster_lvm  9,0T  5,4T  3,6T  61% /gluster/mnt11
/dev/mapper/gluster_vgf-gluster_lvf  9,0T  5,7T  3,4T  63% /gluster/mnt4
/dev/mapper/gluster_vgl-gluster_lvl  9,0T  5,8T  3,3T  64% /gluster/mnt10
/dev/mapper/gluster_vgj-gluster_lvj  9,0T  5,5T  3,6T  61% /gluster/mnt8
/dev/mapper/gluster_vgn-gluster_lvn  9,0T  5,4T  3,6T  61% /gluster/mnt12
/dev/mapper/gluster_vgk-gluster_lvk  9,0T  5,8T  3,3T  64% /gluster/mnt9
/dev/mapper/gluster_vgh-gluster_lvh  9,0T  5,6T  3,5T  63% /gluster/mnt6
/dev/mapper/gluster_vgg-gluster_lvg  9,0T  5,6T  3,5T  63% /gluster/mnt5
/dev/mapper/gluster_vge-gluster_lve  9,0T  5,7T  3,4T  63% /gluster/mnt3
/dev/mapper/gluster_vgc-gluster_lvc  9,0T  5,6T  3,5T  62% /gluster/mnt1
/dev/mapper/gluster_vgd-gluster_lvd  9,0T  5,6T  3,5T  62% /gluster/mnt2
tmpfs                                6,3G     0  6,3G   0% /run/user/0
s01-stg:tier2                        420T  159T  262T  38% /tier2

As you can see, the used space on each brick of the recently added servers is about 800 GB, while the bricks on the original servers are at roughly 60% capacity.

Thank you,
Mauro
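P.S. For reference, the general shape I found in the GlusterFS documentation for shrinking a volume is the "remove-brick" sequence below. I am not sure whether it is safe in my exact situation, which is why I am asking; the brick paths here are only illustrative examples, not my actual brick layout:

```
# Start migrating data off the bricks to be removed
# (volume name "tier2" is from my setup; brick paths below are examples only)
gluster volume remove-brick tier2 \
    s04-stg:/gluster/mnt1/brick \
    s05-stg:/gluster/mnt1/brick \
    s06-stg:/gluster/mnt1/brick start

# Poll until the migration shows "completed" for every brick being removed
gluster volume remove-brick tier2 \
    s04-stg:/gluster/mnt1/brick \
    s05-stg:/gluster/mnt1/brick \
    s06-stg:/gluster/mnt1/brick status

# Only after all bricks report "completed", detach them from the volume
gluster volume remove-brick tier2 \
    s04-stg:/gluster/mnt1/brick \
    s05-stg:/gluster/mnt1/brick \
    s06-stg:/gluster/mnt1/brick commit
```

My understanding is that "commit" before the migration completes would lose data, so I would like confirmation of the safe ordering before I try anything.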
Attachment:
rebalance.log.gz
Description: GNU Zip compressed data
Attachment:
gluster-mnt1-brick.logs.gz
Description: GNU Zip compressed data
_______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx https://lists.gluster.org/mailman/listinfo/gluster-users