> Thank you, that explains indeed a few things! :-)

Thanks for the feedback! This helps a lot in terms of things to optimize in
the mClock profiles.

> But the underlying problem is that we see iowaits/slowdowns on the clients
> while rebalancing.
> I added some nvme storage and am moving the data off the regular hdd. In
> earlier releases I could slow this down when there was load on the clients,
> but now I don’t know how to do this:

It typically takes some time for both client and recovery/backfill traffic to
stabilize. Recovery ops should eventually get throttled down to lower rates,
so you could observe the progress for some time and see if there's an
improvement as far as client ops are concerned.

Could you run the following command on a couple of the osds that data is
being backfilled from? I imagine these would be the osds with hdd as the
backing device:

$ ceph daemon osd.N config show | grep osd_mclock

where N is the osd ID. Among other things, the above shows the allocations in
terms of IOPS for the different types of internal Ceph operations.

If there's no improvement over time, we can think of the next steps.

-Sridhar
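
P.S. If it is easier, you could gather this from several osds at once with a
small loop along the lines of the sketch below. The osd IDs 1, 2 and 3 are
just placeholders for your hdd-backed osds, and since "ceph daemon" talks to
the local admin socket, the loop needs to run on the host where those osds
live:

# print the mClock-related settings for a few osds on this host
for id in 1 2 3; do
    echo "--- osd.$id ---"
    ceph daemon osd.$id config show | grep osd_mclock
done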