Considering https://github.com/ceph/ceph/blob/f6edcef6efe209e8947887752bd2b833d0ca13b7/src/osd/OSD.cc#L10086, the OSD:

- always sets and updates its per-OSD osd_mclock_max_capacity_iops_{hdd,ssd} when the benchmark runs and the measured IOPS is below or equal to osd_mclock_iops_capacity_threshold_{hdd,ssd}, but
- doesn't remove osd_mclock_max_capacity_iops_{hdd,ssd} when the measured IOPS exceeds osd_mclock_iops_capacity_threshold_{hdd,ssd} (500 for HDD, 80000 for SSD), even if the current value of osd_mclock_max_capacity_iops_{hdd,ssd} is set below its default (315 for HDD, 21500 for SSD).

As a result, a per-OSD osd_mclock_max_capacity_iops_hdd can end up set as low as 0.145327 (as in Michel's post) and never be updated afterwards, leading to performance issues.

The idea of a minimum threshold below which osd_mclock_max_capacity_iops_{hdd,ssd} should not be set seems relevant.

CC'ing Sridhar to get his thoughts.

Cheers,
Frédéric.

----- On 22 Mar 24, at 19:37, Kai Stian Olstad ceph+list@xxxxxxxxxx wrote:

> On Fri, Mar 22, 2024 at 06:51:44PM +0100, Frédéric Nass wrote:
>>
>>> The OSD runs a bench and updates osd_mclock_max_capacity_iops_{hdd,ssd} every
>>> time the OSD is started.
>>> If you check the OSD log you'll see it does the bench.
>>
>> Are you sure about the update on every start? Does the update happen only if the
>> benchmark result is < 500 IOPS?
>>
>> It looks like the OSD does not remove any set configuration when the benchmark
>> result is > 500 IOPS. Otherwise, the extremely low value that Michel reported
>> earlier (less than 1 IOPS) would have been updated over time.
>> I guess.
>
> I'm not completely sure; it's been a couple of months since I used mclock. I have
> switched back to wpq because of a nasty bug in mclock that can freeze cluster I/O.
>
> It could be because I was testing osd_mclock_force_run_benchmark_on_init.
> The OSD had its DB on SSD and data on HDD, so the bench measured about 1700 IOPS,
> which was ignored because of the 500 limit.
> So only the SSD got osd_mclock_max_capacity_iops_ssd set.
>
> --
> Kai Stian Olstad
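
For readers who want to reason about the behavior described above, here is a minimal standalone C++ sketch of the decision logic. The names (ConfigStore, on_osd_bench_result) are hypothetical and simplified; this is not the actual Ceph code from OSD.cc, only an illustration of the "set on low bench results, never remove on high ones" behavior under the assumptions stated in the comments.

// NOT actual Ceph code: a simplified illustration of the behavior described above.
#include <iostream>
#include <optional>

// Simulates the per-OSD override store for osd_mclock_max_capacity_iops_{hdd,ssd}.
struct ConfigStore {
  std::optional<double> max_capacity_iops_override;
  void set(double iops) { max_capacity_iops_override = iops; }
  void remove() { max_capacity_iops_override.reset(); }  // never called below
};

// Sketch of what happens after the startup bench, per the description above.
// threshold stands for osd_mclock_iops_capacity_threshold_{hdd,ssd} (500 HDD, 80000 SSD).
void on_osd_bench_result(ConfigStore& cfg, double measured_iops, double threshold) {
  if (measured_iops <= threshold) {
    cfg.set(measured_iops);  // result accepted and persisted as the per-OSD override
  } else {
    // Result ignored, but any previously stored override (even one as low
    // as 0.145327) is left in place instead of being removed or reset.
  }
}

int main() {
  ConfigStore cfg;
  on_osd_bench_result(cfg, 0.145327, 500);  // bad bench result gets persisted
  on_osd_bench_result(cfg, 1700.0, 500);    // good result ignored, stale value kept
  std::cout << "override = "
            << (cfg.max_capacity_iops_override ? *cfg.max_capacity_iops_override : -1)
            << std::endl;  // prints 0.145327
}

In this sketch, the obvious fix would be for the else branch to also clear any existing per-OSD override, or to refuse to store values below a sane floor in the first place, which is the minimum-threshold idea mentioned above.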