Re: osd_mclock_max_capacity_iops_hdd in Reef



Hi Luis,

> We are testing migrations from a cluster running Pacific to Reef. In
> Pacific we needed to tweak osd_mclock_max_capacity_iops_hdd to get decent
> performance out of our cluster.

It would be helpful to know the procedure you are employing for the
performance tests.

> But in Reef it looks like changing the value of
> osd_mclock_max_capacity_iops_hdd does not impact cluster performance. Did
> osd_mclock_max_capacity_iops_hdd become useless?

"osd_mclock_max_capacity_iops_hdd" is still valid in Reef as long as it
accurately represents the capability of the underlying OSD device for the
intended workload.

Between Pacific and Reef many improvements to the mClock feature have been
made. An important change relates to the automatic determination of cost
per I/O which is now tied to the sequential and random IOPS capability of
the underlying device of an OSD. As long as
"osd_mclock_max_capacity_iops_hdd" and
"osd_mclock_max_sequential_bandwidth_hdd" represent a fairly accurate
capability of the backing OSD device, the performance should be along
expected lines. Changing "osd_mclock_max_capacity_iops_hdd" to a value
beyond the capability of the device will obviously not yield any
improvement.
If the above parameters are representative of the capability of the backing
OSD device and you still see lower than expected performance, then it could
be some other issue that needs looking into.
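As a starting point, you can check what each OSD is actually running with,
and override the values if they do not match the device. A sketch against a
hypothetical osd.0 (substitute your own OSD ids and measured values):

```shell
# Inspect the values osd.0 is currently using.
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
ceph config show osd.0 osd_mclock_max_sequential_bandwidth_hdd

# Override them if they do not reflect the backing device
# (450 IOPS and 150 MiB/s here are placeholder values).
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450
ceph config set osd.0 osd_mclock_max_sequential_bandwidth_hdd 157286400
```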
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

