I had this with the balancer active and "crush-compat" mode:

MIN/MAX VAR: 0.43/1.59  STDDEV: 10.81

By increasing the pg_num of some pools (from 8 to 64) and deleting empty pools, I got to this:

MIN/MAX VAR: 0.59/1.28  STDDEV: 6.83

(I do not want to switch to upmap yet.)

-----Original Message-----
From: Tarek Zegar [mailto:tzegar@xxxxxxxxxx]
Sent: woensdag 29 mei 2019 17:52
To: ceph-users
Subject: *****SPAM***** Balancer: uneven OSDs

Can anyone help with this? Why can't I optimize this cluster? The PG counts and data distribution are way off.
__________________

I enabled the balancer plugin and even tried to invoke it manually, but it won't make any changes. Looking at ceph osd df, the distribution is not even at all. Thoughts?

root@hostadmin:~# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE   USE     AVAIL    %USE  VAR  PGS
 1   hdd 0.00980        0    0 B     0 B      0 B     0    0    0
 3   hdd 0.00980  1.00000 10 GiB 8.3 GiB  1.7 GiB 82.83 1.14  156
 6   hdd 0.00980  1.00000 10 GiB 8.4 GiB  1.6 GiB 83.77 1.15  144
 0   hdd 0.00980        0    0 B     0 B      0 B     0    0    0
 5   hdd 0.00980  1.00000 10 GiB 9.0 GiB 1021 MiB 90.03 1.23  159
 7   hdd 0.00980  1.00000 10 GiB 7.7 GiB  2.3 GiB 76.57 1.05  141
 2   hdd 0.00980  1.00000 10 GiB 5.5 GiB  4.5 GiB 55.42 0.76   90
 4   hdd 0.00980  1.00000 10 GiB 5.9 GiB  4.1 GiB 58.78 0.81   99
 8   hdd 0.00980  1.00000 10 GiB 6.3 GiB  3.7 GiB 63.12 0.87  111
                    TOTAL 90 GiB  53 GiB   37 GiB 72.93
MIN/MAX VAR: 0.76/1.23  STDDEV: 12.67

root@hostadmin:~# osdmaptool om --upmap out.txt --upmap-pool rbd
osdmaptool: osdmap file 'om'
writing upmap command output to: out.txt
checking for upmap cleanups
upmap, max-count 100, max deviation 0.01   <--- really? It's not even close to 1% across the drives
 limiting to pools rbd (1)
no upmaps proposed

ceph balancer optimize myplan
Error EALREADY: Unable to find further optimization,or distribution is already perfect
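
For completeness, here is a rough sketch of the two approaches mentioned in this thread: raising pg_num on the small pools, and switching the balancer from crush-compat to upmap mode. The pool name "rbd" and the target of 64 PGs are taken from the messages above; defaults and behaviour vary between Ceph releases, so treat this as a starting point rather than a recipe. Note that upmap mode requires all clients to be Luminous or newer.

# Raise the PG count of a small pool (pre-Nautilus also needs pgp_num):
ceph osd pool set rbd pg_num 64
ceph osd pool set rbd pgp_num 64

# Switch the balancer to upmap mode:
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status

# Or test offline first, as in the osdmaptool run above; out.txt will
# contain the proposed "ceph osd pg-upmap-items" commands:
ceph osd getmap -o om
osdmaptool om --upmap out.txt --upmap-pool rbd
source out.txt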