Thanks, Liang. But this no longer helps since Ceph 17. Setting the mclock profile to "high recovery" speeds things up a little (the relevant commands are sketched at the end of this message). The main problem remains: 95% of the recovery time is spent on just one PG. This was not the case before Quincy.

郑亮 <zhengliang0901@xxxxxxxxx> wrote on Mon, 26 Dec 2022, 03:52:

> Hi Erich,
>
> You can reference the following link:
> https://www.suse.com/support/kb/doc/?id=000019693
>
> Thanks,
> Liang Zheng
>
>
> E Taka <0etaka0@xxxxxxxxx> wrote on Fri, 16 Dec 2022, 01:52:
>
>> Hi,
>>
>> when removing some OSDs with the command `ceph orch osd rm X`, the
>> rebalancing starts very fast, but after a while it almost stalls at a
>> very low recovery rate:
>>
>> Dec 15 18:47:17 … : cluster [DBG] pgmap v125312: 3361 pgs: 13
>> active+clean+scrubbing+deep, 4 active+remapped+backfilling, 3344
>> active+clean; 95 TiB data, 298 TiB used, 320 TiB / 618 TiB avail; 13 MiB/s
>> rd, 3.9 MiB/s wr, 610 op/s; 403603/330817302 objects misplaced (0.122%);
>> 1.1 MiB/s, 2 objects/s recovering
>>
>> As you can see, the rate is 2 objects/s for over 400000 objects. `ceph orch
>> osd rm status` shows long-running draining processes (now over 4 days):
>>
>> OSD  HOST    STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
>> 64   ceph05  draining  1    False    False  False  2022-12-11 16:18:14.692636+00:00
>> …
>>
>> Is there any way to increase the speed of the draining/rebalancing?
>>
>> Thanks!
>> Erich
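
For reference, a minimal sketch of the commands in question, assuming a cephadm-managed Quincy (17.2.x) cluster. The profile names are the built-in mclock ones; `osd_mclock_override_recovery_settings` only exists in newer 17.2 point releases, so treat that part as an assumption:

  # check which mclock profile is currently active
  ceph config get osd osd_mclock_profile

  # give recovery/backfill a larger share of the I/O budget than client traffic
  ceph config set osd osd_mclock_profile high_recovery_ops

  # where available, let the classic knobs from the SUSE KB article take
  # effect again despite mclock (assumption: your point release has this option)
  ceph config set osd osd_mclock_override_recovery_settings true
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8

  # watch the drain and the remaining backfilling PGs
  ceph orch osd rm status
  ceph pg ls backfilling

  # revert to the release default once the drain has finished
  ceph config rm osd osd_mclock_profile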