Which Ceph release are you running? You mention the balancer, which would
imply a certain lower bound. What does `ceph balancer status` show?
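
If it turns out the balancer is off, or still in the older crush-compat
mode, switching it to upmap mode usually gives a much more even PG
distribution on mixed-capacity clusters like yours. A rough sketch of what
I would check and run, assuming all of your clients are Luminous or newer
(the max-deviation tuning at the end is optional and only exists on recent
releases):

    # Verify no pre-Luminous clients are connected; upmap requires Luminous+
    ceph features

    # Allow upmap entries in the osdmap
    ceph osd set-require-min-compat-client luminous

    # Switch the balancer to upmap mode and let it run automatically
    ceph balancer mode upmap
    ceph balancer on

    # Watch progress; the eval score shrinks toward 0 as the layout improves
    ceph balancer status
    ceph balancer eval

    # Optional tuning: act once an OSD deviates by a single PG from the
    # mean, instead of the default of 5
    ceph config set mgr mgr/balancer/upmap_max_deviation 1

If old clients rule out upmap, `ceph osd reweight-by-utilization` is the
cruder fallback; run `ceph osd test-reweight-by-utilization` first to
preview what it would change without touching anything.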

> Does anyone know how I can rebalance my cluster to balance out the OSD
> usage?
>
> I just added 12 more 14 TB HDDs to my cluster (a cluster made up of 12 TB
> and 14 TB disks), bringing my total to 48 OSDs. `ceph df` reports my pool
> as 83% full (see below). I am aware this only reports the fullest drive.
> However, after backfilling, data distribution is still heavily skewed to
> a few OSDs: the oldest are at ~80% and the newest are at ~50%. My issue
> is that data is currently being written to all drives equally, and at
> this rate the cluster will start disabling writes despite still having
> significant raw capacity. PGs do not seem to be evenly distributed
> either. Weights are 12.73 for all 14 TB OSDs and 10.9 for all 12 TB OSDs.
> I tried using the balancer to create a new plan, but it had no real
> effect. I haven't seen much useful info online.
>
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> hdd    582 TiB  203 TiB  379 TiB  380 TiB   65.20
> TOTAL  582 TiB  203 TiB  379 TiB  380 TiB   65.20
>
> --- POOLS ---
> POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
> cephfs_data       2  261 TiB  68.62M   379 TiB  83.64  56 TiB
> cephfs_metadata   3  536 MiB  12.74k   1.6 GiB  0      25 TiB
>
> `osd df`:
> ID  CLASS  WEIGHT    REWEIGHT  SIZE    RAW USE  DATA     OMAP     META    AVAIL    %USE   VAR   PGS  STATUS
>  0  hdd    10.91409  1.00000   11 TiB  8.4 TiB  8.4 TiB  1.6 MiB  22 GiB  2.5 TiB  77.00  1.18   92  up
>  4  hdd    10.91409  1.00000   11 TiB  6.9 TiB  6.9 TiB   21 MiB  18 GiB  4.0 TiB  63.03  0.97   80  up
>  5  hdd    10.91409  1.00000   11 TiB  6.5 TiB  6.5 TiB  1.3 MiB  17 GiB  4.4 TiB  59.50  0.91   71  up
> 11  hdd    12.73340  1.00000   13 TiB  9.4 TiB  9.4 TiB   36 MiB  24 GiB  3.3 TiB  74.13  1.14  105  up
> 15  hdd    12.73340  1.00000   13 TiB  8.2 TiB  8.2 TiB   20 MiB  21 GiB  4.5 TiB  64.75  0.99   99  up
> 16  hdd    10.91409  1.00000   11 TiB  6.4 TiB  6.4 TiB  1.7 MiB  16 GiB  4.5 TiB  58.81  0.90   74  up
> 20  hdd    12.73340  1.00000   13 TiB  8.4 TiB  8.4 TiB   33 MiB  23 GiB  4.3 TiB  66.09  1.01  102  up
> 22  hdd    12.73340  1.00000   13 TiB  8.6 TiB  8.6 TiB  3.3 MiB  22 GiB  4.1 TiB  67.48  1.04  100  up
> 23  hdd    12.73340  1.00000   13 TiB  7.6 TiB  7.6 TiB  406 KiB  20 GiB  5.1 TiB  59.93  0.92   89  up
> 36  hdd    12.73340  1.00000   13 TiB  7.3 TiB  7.2 TiB  171 KiB  16 GiB  5.5 TiB  57.00  0.87   85  up
> 41  hdd    12.73340  1.00000   13 TiB  7.0 TiB  7.0 TiB  2.1 MiB  16 GiB  5.7 TiB  54.85  0.84   85  up
> 46  hdd    12.73340  1.00000   13 TiB  8.1 TiB  8.0 TiB   67 MiB  19 GiB  4.7 TiB  63.34  0.97   99  up
>  3  hdd    10.91409  1.00000   11 TiB  7.6 TiB  7.6 TiB  1.0 MiB  19 GiB  3.3 TiB  69.85  1.07   87  up
>  6  hdd    10.91409  1.00000   11 TiB  8.3 TiB  8.3 TiB   39 MiB  22 GiB  2.6 TiB  76.28  1.17   93  up
> 10  hdd    12.73340  1.00000   13 TiB  9.6 TiB  9.6 TiB  5.1 MiB  24 GiB  3.1 TiB  75.52  1.16  109  up
> 14  hdd    12.73340  1.00000   13 TiB  8.9 TiB  8.8 TiB   34 MiB  22 GiB  3.9 TiB  69.57  1.07   99  up
> 24  hdd    12.73340  1.00000   13 TiB   11 TiB   11 TiB   22 MiB  34 GiB  2.1 TiB  83.28  1.28  117  up
> 25  hdd    12.73340  1.00000   13 TiB  8.4 TiB  8.4 TiB   24 MiB  21 GiB  4.4 TiB  65.84  1.01   97  up
> 26  hdd    12.73340  1.00000   13 TiB  8.9 TiB  8.9 TiB  205 KiB  22 GiB  3.8 TiB  69.84  1.07  102  up
> 30  hdd    10.91409  1.00000   11 TiB  6.9 TiB  6.9 TiB   33 MiB  16 GiB  4.0 TiB  63.31  0.97   92  up
> 39  hdd    12.73340  1.00000   13 TiB  7.2 TiB  7.2 TiB   36 MiB  16 GiB  5.6 TiB  56.39  0.86   93  up
> 43  hdd    12.73340  1.00000   13 TiB  8.3 TiB  8.3 TiB  3.6 MiB  19 GiB  4.4 TiB  65.54  1.01   98  up
> 47  hdd    12.73340  1.00000   13 TiB  7.1 TiB  7.1 TiB   51 MiB  17 GiB  5.6 TiB  55.70  0.85   88  up
>  2  hdd    10.91409  1.00000   11 TiB  7.4 TiB  7.3 TiB   22 KiB  19 GiB  3.6 TiB  67.36  1.03   82  up
>  7  hdd    10.91409  1.00000   11 TiB  8.5 TiB  8.5 TiB  530 KiB  21 GiB  2.4 TiB  77.89  1.19   97  up
>  8  hdd    12.73340  1.00000   13 TiB  8.5 TiB  8.5 TiB   21 MiB  22 GiB  4.3 TiB  66.53  1.02   99  up
> 12  hdd    12.73340  1.00000   13 TiB  8.5 TiB  8.5 TiB   20 MiB  22 GiB  4.2 TiB  67.13  1.03   99  up
> 18  hdd    12.73340  1.00000   13 TiB  9.9 TiB  9.9 TiB   24 MiB  31 GiB  2.8 TiB  77.81  1.19  112  up
> 19  hdd    12.73340  1.00000   13 TiB  8.8 TiB  8.8 TiB   34 MiB  22 GiB  3.9 TiB  69.17  1.06  104  up
> 21  hdd    12.73340  1.00000   13 TiB  9.5 TiB  9.4 TiB   15 MiB  28 GiB  3.3 TiB  74.37  1.14  111  up
> 31  hdd    10.91409  1.00000   11 TiB  6.3 TiB  6.3 TiB   56 MiB  14 GiB  4.6 TiB  57.53  0.88   77  up
> 32  hdd    10.91409  1.00000   11 TiB  8.0 TiB  8.0 TiB   44 MiB  18 GiB  2.9 TiB  73.15  1.12   99  up
> 37  hdd    12.73340  1.00000   13 TiB  8.2 TiB  8.1 TiB  4.2 MiB  18 GiB  4.6 TiB  64.09  0.98   94  up
> 40  hdd    12.73340  1.00000   13 TiB  7.3 TiB  7.3 TiB  1.4 MiB  16 GiB  5.5 TiB  57.10  0.88   86  up
> 44  hdd    12.73340  1.00000   13 TiB  7.7 TiB  7.7 TiB  2.6 MiB  17 GiB  5.0 TiB  60.55  0.93   92  up
>  1  hdd    10.91409  1.00000   11 TiB  5.9 TiB  5.9 TiB   55 MiB  15 GiB  5.0 TiB  54.14  0.83   72  up
>  9  hdd    12.73340  1.00000   13 TiB  9.4 TiB  9.3 TiB  705 KiB  24 GiB  3.4 TiB  73.57  1.13  109  up
> 13  hdd    12.73340  1.00000   13 TiB  8.0 TiB  8.0 TiB   22 MiB  21 GiB  4.7 TiB  62.74  0.96   91  up
> 17  hdd    10.91409  1.00000   11 TiB  7.5 TiB  7.5 TiB   24 MiB  19 GiB  3.4 TiB  68.88  1.06   88  up
> 27  hdd    12.73340  1.00000   13 TiB  6.9 TiB  6.8 TiB  4.8 MiB  17 GiB  5.9 TiB  53.81  0.83   78  up
> 28  hdd    12.73340  1.00000   13 TiB  7.7 TiB  7.7 TiB   19 MiB  20 GiB  5.0 TiB  60.46  0.93   91  up
> 29  hdd    12.73340  1.00000   13 TiB  8.3 TiB  8.3 TiB   17 MiB  21 GiB  4.4 TiB  65.56  1.01  100  up
> 33  hdd    10.91409  1.00000   11 TiB  6.5 TiB  6.5 TiB  1.6 MiB  15 GiB  4.4 TiB  59.23  0.91   77  up
> 34  hdd    10.91409  1.00000   11 TiB  6.6 TiB  6.6 TiB   44 MiB  15 GiB  4.3 TiB  60.87  0.93   81  up
> 35  hdd    10.91409  1.00000   11 TiB  6.6 TiB  6.5 TiB   18 MiB  15 GiB  4.4 TiB  60.08  0.92   81  up
> 38  hdd    12.73340  1.00000   13 TiB  8.4 TiB  8.4 TiB   18 MiB  19 GiB  4.3 TiB  66.25  1.02  105  up
> 42  hdd    12.73340  1.00000   13 TiB  6.9 TiB  6.9 TiB   21 MiB  16 GiB  5.8 TiB  54.26  0.83   84  up
> 45  hdd    12.73340  1.00000   13 TiB  7.7 TiB  7.7 TiB   38 MiB  18 GiB  5.0 TiB  60.61  0.93   94  up
>     TOTAL                     582 TiB  380 TiB  379 TiB  942 MiB  950 GiB  203 TiB  65.20
>
> Thank you,
> Ricardo

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx