Re: osd_mclock_max_capacity_iops_hdd in Reef

Hi Luis,

> What we observed is that with Pacific, by tweaking
> osd_mclock_max_capacity_iops_hdd, we can go from around 200 MB/s of writes
> up to 600 MB/s of writes on the balanced profile.
> But with Reef, changing osd_mclock_max_capacity_iops_hdd does not change
> the cluster's performance much (or if it does, the differences are small
> enough that I did not notice them).

The above probably indicates that the default values for
osd_mclock_max_capacity_iops_hdd are close enough to the actual capability
of the backing device.
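
For anyone following along, the value in effect can be checked, and overridden if needed, with the `ceph config` CLI. A sketch, assuming an OSD id of `osd.0` and an override value chosen purely for illustration:

```shell
# Show the capacity value this OSD is actually using
# (defaults apply when nothing is stored in the config db):
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

# List any per-OSD overrides stored in the config db:
ceph config dump | grep osd_mclock_max_capacity_iops

# Override the capacity for a single OSD if the default is off
# (450 here is just an example figure):
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450
```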

> That being said, the performance of Reef "out of the box" is what we
> expect of our cluster (around 600 MB/s), while with Pacific we needed to
> manually tweak osd_mclock_max_capacity_iops_hdd to get the expected
> performance. So there is definitely a big improvement there.

This is good feedback. One of our goals was to achieve a hands-free
configuration of mClock and to fine-tune only when necessary.
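
Related to the profile mentioned above: the mClock profile itself can also be switched at runtime without touching the capacity option. A sketch, using `osd.0` as an example id:

```shell
# Show the active mClock profile for one OSD (balanced is the default):
ceph config show osd.0 osd_mclock_profile

# Prioritize client I/O over background recovery, cluster-wide:
ceph config set osd osd_mclock_profile high_client_ops

# Revert to the default profile:
ceph config set osd osd_mclock_profile balanced
```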

> What made me think that this option was maybe not used anymore: during
> the deployment of Pacific, each OSD pushes its own
> osd_mclock_max_capacity_iops_hdd value, but a Reef deployment does not.
> We did not see any values for the OSDs in the ceph config db.

The fact that you don't see any values in the config db indicates that the
default values are in effect. We added a fallback mechanism to use the
default values in case the benchmark test during OSD boot-up returned
unrealistic values. Please see
for more details and awareness around this. In your case, the configuration
may be left as is, since the defaults are giving you the expected
performance.

ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
