Re: osd_mclock_max_capacity_iops_hdd in Reef

Hi Sridhar. Thanks for your reply:


> > We are testing migrations from a cluster running Pacific to Reef. In
> > Pacific we needed to tweak osd_mclock_max_capacity_iops_hdd to get decent
> > performance from our cluster.
>
> It would be helpful to know the procedure you are employing for the
> migration.

For now we have run some benchmarks on a fairly small dev/test cluster. It was deployed using cephadm and upgraded from Pacific to Reef with cephadm.
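
For context, a typical way to generate this kind of sequential write load is rados bench (the pool name and duration are placeholders; the exact tool and options we used may differ):

    rados bench -p <test-pool> 60 write -b 4M -t 16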

What we observed is that with Pacific, tweaking osd_mclock_max_capacity_iops_hdd let us go from around 200 MB/s of writes up to 600 MB/s of writes with the balanced profile.
With Reef, however, changing osd_mclock_max_capacity_iops_hdd does not noticeably change the cluster's performance (or if it does, the differences are small enough that I did not see them).
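
For completeness, the kind of tweak we apply looks like this (600 is only an example value, and it can also be set per OSD with osd.<id> instead of osd):

    ceph config set osd osd_mclock_max_capacity_iops_hdd 600
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd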

That being said, the out-of-the-box performance of Reef is what we expect from our cluster (around 600 MB/s), while with Pacific we had to tweak osd_mclock_max_capacity_iops_hdd manually to get the expected performance. So there is definitely a big improvement there.

What made me think this option was maybe not used anymore: when deploying Pacific, each OSD pushes its own osd_mclock_max_capacity_iops_hdd value, but when deploying Reef it does not, and we did not see any values for the OSDs in the ceph config db.
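
For example, a check along these lines (the grep pattern is just illustrative) listed one entry per OSD on the Pacific cluster but returned nothing on Reef:

    ceph config dump | grep osd_mclock_max_capacity_iops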

In conclusion, at least based on our pre-upgrade tests, mClock seems to behave a lot better in Reef than in Pacific.

Luis Domingues
Proton AG


On Monday, 8 January 2024 at 12:29, Sridhar Seshasayee <sseshasa@xxxxxxxxxx> wrote:


> Hi Luis,
> 
> > We are testing migrations from a cluster running Pacific to Reef. In
> > Pacific we needed to tweak osd_mclock_max_capacity_iops_hdd to get decent
> > performance from our cluster.
> 
> 
> It would be helpful to know the procedure you are employing for the
> migration.
> 
> > But in Reef it looks like changing the value of
> > osd_mclock_max_capacity_iops_hdd does not impact cluster performance. Did
> > osd_mclock_max_capacity_iops_hdd become useless?
> 
> 
> "osd_mclock_max_capacity_iops_hdd" is still valid in Reef as long as it
> accurately represents the capability of the underlying OSD device for the
> intended workload.
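> 
> A quick way to sanity-check the configured value is to benchmark the OSD
> directly and compare the reported IOPS against it; for example (osd.0 and
> the parameters are illustrative; this is roughly the 4 KiB write test the
> documentation suggests):
> 
>     ceph tell osd.0 bench 12288000 4096 4194304 100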
> 
> Between Pacific and Reef many improvements to the mClock feature have been
> made. An important change relates to the automatic determination of cost
> per I/O which is now tied to the sequential and random IOPS capability of
> the underlying device of an OSD. As long as
> "osd_mclock_max_capacity_iops_hdd" and
> "osd_mclock_max_sequential_bandwidth_hdd" represent a fairly accurate
> capability of the backing OSD device, the performance should be along
> expected lines. Changing the "osd_mclock_max_capacity_iops_hdd" to a value
> that is beyond the capability of the device will obviously not yield any
> improvement.
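> 
> As a rough illustration of how the two options interact (using the current
> defaults of about 150 MiB/s sequential bandwidth and 315 IOPS for HDDs;
> treat this as an approximation of the implementation, not the exact
> formula):
> 
>     cost per IO ~= sequential_bandwidth / iops_capacity
>                 ~= (150 * 1024 * 1024) / 315 ~= 488 KiB
> 
> So raising "osd_mclock_max_capacity_iops_hdd" beyond what the device can
> actually do only lowers the assumed per-IO cost on paper, not the real
> throughput.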
> 
> If the above parameters are representative of the capability of the backing
> OSD device and you still see lower than expected performance, then it could
> be some other issue that needs looking into.
> -Sridhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


