Dear cephers,

With one OSD down (200 GB/9.1 TB data), the rebalance has been running for 3 hours and is still in progress. Client bandwidth can go as high as 200 MB/s, yet even with little client request throughput, recovery proceeds at only a couple of MB/s. I wonder whether there is any configuration I could tune to improve this; the candidates I have been considering are sketched at the end of this message. The cluster runs Quincy 17.2.5, deployed by cephadm. Recovery this slow could cause real problems during peak usage hours.

Best wishes,
Ben

-------------------------------------------------
    volumes: 1/1 healthy
    pools:   8 pools, 209 pgs
    objects: 93.04M objects, 4.8 TiB
    usage:   15 TiB used, 467 TiB / 482 TiB avail
    pgs:     1206837/279121971 objects degraded (0.432%)
             208 active+clean
             1   active+undersized+degraded+remapped+backfilling

  io:
    client:   80 KiB/s rd, 420 KiB/s wr, 12 op/s rd, 29 op/s wr
    recovery: 6.2 MiB/s, 113 objects/s
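P.S. The kind of tuning I have in mind is sketched below. It is untested on this cluster: which knobs actually take effect depends on the op queue scheduler the OSDs run (Quincy defaults to mClock, which manages the classic recovery settings itself), and the numeric values are only illustrative, not recommendations.

    # Check which scheduler the OSDs are using (Quincy defaults to mclock_scheduler)
    ceph config show osd.0 osd_op_queue

    # With mClock, switching the built-in profile is the supported way to
    # favor recovery traffic; switch back once the backfill finishes
    ceph config set osd osd_mclock_profile high_recovery_ops
    ceph config set osd osd_mclock_profile balanced   # the default profile

    # With the classic wpq scheduler, the traditional knobs apply instead
    # (values here are illustrative)
    ceph config set osd osd_max_backfills 4
    ceph config set osd osd_recovery_max_active 8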