Adding to my previous mail...
I'm finding a couple of strange errors in the rebalance log (/var/log/glusterfs/sr_vol01-rebalance.log), e.g.:

[2015-01-21 10:00:32.123999] E [afr-self-heal-entry.c:1135:afr_sh_entry_impunge_newfile_cbk] 0-sr_vol01-replicate-11: creation of /some/file/on/the/volume.data on sr_vol01-client-23 failed (No space left on device)

Why does the rebalance seemingly not take the space left on the available disks into account? This is the current situation on this particular node:

[root@gluster03 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  2.4G   45G   5% /
tmpfs                         7.8G     0  7.8G   0% /dev/shm
/dev/sda1                     485M   95M  365M  21% /boot
/dev/sdb1                     1.9T  577G  1.3T  31% /export/brick1gfs03
/dev/sdc1                     1.9T  154G  1.7T   9% /export/brick2gfs03
/dev/sdd1                     1.9T  413G  1.5T  23% /export/brick3gfs03
/dev/sde1                     1.9T  1.5T  417G  78% /export/brick4gfs03
/dev/sdf1                     1.9T  1.6T  286G  85% /export/brick5gfs03
/dev/sdg1                     1.9T  1.4T  443G  77% /export/brick6gfs03
/dev/sdh1                     1.9T   33M  1.9T   1% /export/brick7gfs03
/dev/sdi1                     466G   62G  405G  14% /export/brick8gfs03
/dev/sdj1                     466G  166G  301G  36% /export/brick9gfs03
/dev/sdk1                     466G  466G   20K 100% /export/brick10gfs03
/dev/sdl1                     466G  450G   16G  97% /export/brick11gfs03
/dev/sdm1                     1.9T  206G  1.7T  12% /export/brick12gfs03
/dev/sdn1                     1.9T  306G  1.6T  17% /export/brick13gfs03
/dev/sdo1                     1.9T  107G  1.8T   6% /export/brick14gfs03
/dev/sdp1                     1.9T  252G  1.6T  14% /export/brick15gfs03

Why are brick10 and brick11 over-utilised when there is plenty of space on bricks 6, 14, etc.? Does anyone have any idea?

Cheers,
Olav

On 21/01/15 13:18, Olav Peeters wrote:
> Hi,
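P.S. In case it helps with the diagnosis: as far as I understand, DHT places files by filename hash rather than by free space unless cluster.min-free-disk kicks in, so the following checks should show whether that option is in effect and which hash ranges the full bricks own (volume name and brick paths as above; run on gluster03 as root):

  gluster volume info sr_vol01
      (any non-default options, e.g. cluster.min-free-disk, show up under "Options Reconfigured")
  gluster volume rebalance sr_vol01 status
  getfattr -n trusted.glusterfs.dht -e hex /export/brick10gfs03
  getfattr -n trusted.glusterfs.dht -e hex /export/brick14gfs03
      (compare the layout range of a full brick with that of a nearly empty one)

If cluster.min-free-disk turns out to be unset, I assume something like

  gluster volume set sr_vol01 cluster.min-free-disk 10%

would at least steer new files away from the nearly full bricks, but corrections are welcome.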