Hello,

I'm migrating from Nautilus to Quincy, and data is being replicated between the two clusters. As the data is migrated (about 60 TB so far), the Quincy cluster repeatedly does a poor job of balancing PGs across all OSDs. I never had this issue with Nautilus or other versions.

Setup: Quincy 17.2.5, three-node cluster, 30 OSDs (8 TB each). I've had to scale the PG counts manually as data is copied over, because the autoscaler doesn't kick in for whatever reason.

I have two primary pools:

- RGW data pool: erasure-coded 4+2, assigned 128 PGs
- RBD pool: replicated metadata (x3), plus an erasure-coded 4+2 data pool with 128 PGs

A few OSDs are nearing capacity while others are underutilized. How can I get this balanced?

# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP      META     AVAIL    %USE   VAR   PGS  STATUS
 0  hdd    7.15359   1.00000  7.2 TiB  3.2 TiB  3.2 TiB     8 KiB   17 GiB  3.9 TiB  45.35  1.64  119      up
 1  hdd    7.18900   1.00000  7.2 TiB  3.8 TiB  3.8 TiB   331 MiB   21 GiB  3.4 TiB  53.36  1.92  188      up
 2  hdd    7.18900   1.00000  7.2 TiB  3.7 TiB  3.6 TiB     7 KiB   19 GiB  3.5 TiB  50.83  1.83  123      up
 3  hdd    7.27739   1.00000  7.3 TiB  5.5 TiB  5.5 TiB     9 KiB   19 GiB  1.7 TiB  76.26  2.75  170      up
 4  hdd    7.27739   1.00000  7.3 TiB  591 GiB  587 GiB     5 KiB  3.7 GiB  6.7 TiB   7.93  0.29    6      up
 5  hdd    7.27739   1.00000  7.3 TiB   38 GiB   37 GiB   2.2 MiB  1.4 GiB  7.2 TiB   0.51  0.02   40      up
 6  hdd    7.27739   1.00000  7.3 TiB  3.4 TiB  3.4 TiB    20 KiB   19 GiB  3.9 TiB  46.52  1.68  125      up
 7  hdd    7.27739   1.00000  7.3 TiB  284 GiB  282 GiB     8 KiB  1.6 GiB  7.0 TiB   3.80  0.14   34      up
 8  hdd    7.27739   1.00000  7.3 TiB  546 GiB  543 GiB     2 KiB  3.2 GiB  6.7 TiB   7.32  0.26    8      up
 9  hdd    7.27739   1.00000  7.3 TiB  433 GiB  431 GiB    23 KiB  2.6 GiB  6.9 TiB   5.82  0.21   83      up
10  hdd    7.15359   1.00000  7.2 TiB   10 GiB  9.3 GiB     5 KiB  704 MiB  7.1 TiB   0.14  0.00    3      up
11  hdd    7.27739   1.00000  7.3 TiB  978 GiB  973 GiB     9 KiB  4.9 GiB  6.3 TiB  13.13  0.47   11      up
12  hdd    7.18900   1.00000  7.2 TiB  5.1 TiB  5.1 TiB   333 MiB   18 GiB  2.1 TiB  71.42  2.58  167      up
13  hdd    7.27739   1.00000  7.3 TiB   15 GiB   14 GiB    10 KiB  1.4 GiB  7.3 TiB   0.20  0.01   39      up
14  hdd    7.18900   1.00000  7.2 TiB  706 GiB  702 GiB     1 KiB  3.7 GiB  6.5 TiB   9.59  0.35   47      up
15  hdd    7.27739   1.00000  7.3 TiB  3.7 TiB  3.7 TiB     7 KiB   19 GiB  3.6 TiB  50.55  1.82  157      up
16  hdd    7.27739   1.00000  7.3 TiB  433 GiB  430 GiB    10 MiB  2.7 GiB  6.9 TiB   5.81  0.21   80      up
17  hdd    7.27739   1.00000  7.3 TiB  3.0 TiB  3.0 TiB    16 KiB   17 GiB  4.3 TiB  41.26  1.49  120      up
18  hdd    7.27739   1.00000  7.3 TiB  5.2 TiB  5.1 TiB    15 KiB   18 GiB  2.1 TiB  70.94  2.56  170      up
19  hdd    7.27739   1.00000  7.3 TiB  545 GiB  542 GiB     4 KiB  3.1 GiB  6.7 TiB   7.31  0.26    9      up
20  hdd    7.15359   1.00000  7.2 TiB  434 GiB  431 GiB   336 MiB  2.8 GiB  6.7 TiB   5.92  0.21   78      up
21  hdd    7.18900   1.00000  7.2 TiB  5.2 TiB  5.2 TiB     8 KiB   18 GiB  2.0 TiB  72.03  2.60  169      up
22  hdd    7.18900   1.00000  7.2 TiB  287 GiB  285 GiB     1 KiB  1.6 GiB  6.9 TiB   3.90  0.14    5      up
23  hdd    7.27739   1.00000  7.3 TiB  407 GiB  404 GiB    11 MiB  3.0 GiB  6.9 TiB   5.46  0.20   43      up
24  hdd    7.18900   1.00000  7.2 TiB  5.3 TiB  5.3 TiB     7 KiB   19 GiB  1.9 TiB  73.64  2.66  172      up
25  hdd    7.27739   1.00000  7.3 TiB  5.6 TiB  5.5 TiB     7 KiB   20 GiB  1.7 TiB  76.31  2.75  173      up
26  hdd    7.27739   1.00000  7.3 TiB  591 GiB  587 GiB    13 KiB  3.7 GiB  6.7 TiB   7.93  0.29    7      up
27  hdd    7.27739   1.00000  7.3 TiB  294 GiB  292 GiB     9 KiB  2.0 GiB  7.0 TiB   3.95  0.14   41      up
28  hdd    7.27739   1.00000  7.3 TiB   36 GiB   36 GiB     4 KiB  487 MiB  7.2 TiB   0.49  0.02    6      up
29  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB    12 KiB  6.1 GiB  6.2 TiB  15.33  0.55    7      up
                      TOTAL   217 TiB   60 TiB   60 TiB  1024 MiB  272 GiB  157 TiB  27.73
MIN/MAX VAR: 0.00/2.75  STDDEV: 28.33

Thank you.
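P.S. My working assumption (please correct me if this is the wrong approach for EC pools on a three-node cluster) is that this comes down to the pg_autoscaler and balancer mgr modules, so the following is roughly what I was planning to run next. First, confirm both modules are enabled and see what they make of the current layout:

# ceph mgr module ls
# ceph osd pool autoscale-status
# ceph balancer status
# ceph balancer eval

Then, if the balancer turns out to be off or still in crush-compat mode, switch it to upmap and let it run:

# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on

Is that the right direction, or is something else going on here?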