Hi,
Thanks a lot for the clarification; we will adapt our setup to use a
custom profile with the proposed parameters.
Kr
Philippe
On 9/11/22 14:04, Aishwarya Mathuria wrote:
Hello Philippe,
Your understanding is correct: 50% of the IOPS are reserved for client
operations.
osd_mclock_max_capacity_iops_hdd defines the capacity per OSD.
There is an mClock queue for each OSD shard. The number of shards is
defined by osd_op_num_shards_hdd
<https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_op_num_shards_hdd>, which by default is set to 5.
So each queue has osd_mclock_max_capacity_iops_hdd/osd_op_num_shards_hdd
IOPS.
In your case, this means that the capacity of each mClock queue is
about 4578 IOPS (22889/5).
This makes osd_mclock_scheduler_client_res = 2289 (50% of 4578).
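If it helps to see the arithmetic in one place, here is a minimal Python
sketch (not Ceph code) of how those values are derived; the constants are
taken from your config output, and the 50% client reservation reflects the
high_client_ops behaviour described above.

    # Minimal sketch of the per-shard mClock capacity calculation.
    # Values come from the config output quoted below in this thread.
    osd_mclock_max_capacity_iops_hdd = 22889.222997  # per-OSD capacity
    osd_op_num_shards_hdd = 5                        # default shard count

    # Each OSD shard has its own mClock queue, so the capacity is split per shard.
    per_shard_iops = osd_mclock_max_capacity_iops_hdd / osd_op_num_shards_hdd

    # high_client_ops reserves 50% of the per-shard capacity for client ops.
    client_res = per_shard_iops * 0.50

    print(round(per_shard_iops))  # -> 4578
    print(round(client_res))      # -> 2289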
I hope that explains the numbers you are seeing.
We also have a few optimizations
<https://github.com/ceph/ceph/pull/48226/files> coming for the mClock
profiles, where we are increasing the reservation for client operations
in the high_client_ops profile. This change will also address the
inflated OSD capacity numbers that are encountered in some cases with
osd bench.
Let me know if you have any other questions.
Regards,
Aishwarya
On Wed, Nov 9, 2022 at 1:46 PM philippe <philippe.vanhecke@xxxxxxxxx> wrote:
Hi,
We have a Quincy 17.2.5 based cluster, and we have some questions
regarding the mClock IOPS scheduler.
According to the documentation, the default profile is
HIGH_CLIENT_OPS,
which means that 50% of an OSD's IOPS are reserved for client
operations.
But looking at the OSD configuration settings, it seems that this is not
the case, or perhaps there is something I don't understand.
ceph config get osd.0 osd_mclock_profile
high_client_ops
ceph config show osd.0 | grep mclock
osd_mclock_max_capacity_iops_hdd                 22889.222997  mon
osd_mclock_scheduler_background_best_effort_lim  999999        default
osd_mclock_scheduler_background_best_effort_res  1144          default
osd_mclock_scheduler_background_best_effort_wgt  2             default
osd_mclock_scheduler_background_recovery_lim     4578          default
osd_mclock_scheduler_background_recovery_res     1144          default
osd_mclock_scheduler_background_recovery_wgt     1             default
osd_mclock_scheduler_client_lim                  999999        default
osd_mclock_scheduler_client_res                  2289          default
osd_mclock_scheduler_client_wgt                  2             default
So I have osd_mclock_max_capacity_iops_hdd = 22889.222997; why is
osd_mclock_scheduler_client_res not 11444 (50% of that)?
This value seems strange to me.
Kr
Philippe
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx