Re: How mClock profile calculation works, and IOPS

Hi Sridhar,

Thanks for the information.

> 
> The above values are the result of distributing the IOPS across all the OSD
> shards, as defined by the osd_op_num_shards_[hdd|ssd] option. For HDDs this is
> set to 5, so the IOPS are distributed across the 5 shards (e.g., 675/5 = 135 for
> osd_mclock_scheduler_background_recovery_lim, and so on for the other
> reservation and limit options).

Why was it done that way? I do not understand why the IOPS are distributed across the shards when the measurement we have is for one disk alone. Does this mean that with the default parameters we will always be far from reaching the OSD's limit?
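
Just to check that I follow the arithmetic, here is a rough sketch of how I understand the per-shard values are derived (illustrative Python only, not Ceph code; the shard count of 5 and the 675 IOPS figure come from your reply below):

    # Illustrative sketch only, not Ceph code. The baseline IOPS come from an
    # OSD bench run with 4 KiB random writes, per your earlier reply.
    num_shards = 5                  # osd_op_num_shards_hdd (default for HDDs)
    recovery_lim_osd_wide = 675     # IOPS allocated to background recovery for the whole OSD
    recovery_lim_per_shard = recovery_lim_osd_wide // num_shards
    print(recovery_lim_per_shard)   # 135, matching osd_mclock_scheduler_background_recovery_lim above
    # Going the other way, each per-shard value in my dump corresponds to an
    # OSD-wide allocation of value * num_shards, e.g. client_lim 90 * 5 = 450 IOPS.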

Luis Domingues
Proton AG


------- Original Message -------
On Monday, April 3rd, 2023 at 07:43, Sridhar Seshasayee <sseshasa@xxxxxxxxxx> wrote:


> Hi Luis,
> 
> 
> > I am reading some documentation about mClock and have two questions.
> 
> > First, about the IOPS: are those disk IOPS or some other kind of IOPS? And
> > what are the assumptions behind them (e.g. block size, sequential or random
> > reads/writes)?
> 
> 
> This is the result of running the OSD bench test with random writes at a
> 4 KiB block size.
> 
> > But what I get is:
> > 
> > "osd_mclock_scheduler_background_best_effort_lim": "999999",
> > "osd_mclock_scheduler_background_best_effort_res": "18",
> > "osd_mclock_scheduler_background_best_effort_wgt": "2",
> > "osd_mclock_scheduler_background_recovery_lim": "135",
> > "osd_mclock_scheduler_background_recovery_res": "36",
> > "osd_mclock_scheduler_background_recovery_wgt": "1",
> > "osd_mclock_scheduler_client_lim": "90",
> > "osd_mclock_scheduler_client_res": "36",
> > "osd_mclock_scheduler_client_wgt": "1",
> > 
> > These values seem very low compared with what my disk is able to handle.
> > 
> > Is this the expected calculation, or did I miss something about how those
> > profiles are populated?
> 
> 
> The above values are the result of distributing the IOPS across all the OSD
> shards, as defined by the osd_op_num_shards_[hdd|ssd] option. For HDDs this is
> set to 5, so the IOPS are distributed across the 5 shards (e.g., 675/5 = 135 for
> osd_mclock_scheduler_background_recovery_lim, and so on for the other
> reservation and limit options).
> 
> -Sridhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


