How does Ceph OSD bench work?


 



Hi,

I am currently testing some new disks, doing some benchmarks and such, and I would like to understand how the OSD bench works.

To quickly explain our setup: we have a small Ceph cluster where the new disks are inserted, and some pools with no replication at all and only 1 PG, up-mapped to those new disks, so I can benchmark them in isolation.

The odd thing is that when I run tests with the fio tool, I get similar results on all disks, and the same goes for a 5-minute rados bench. But the OSD bench run at OSD startup, which mClock uses to configure osd_mclock_max_capacity_iops_hdd, gives me a very big difference between disks (600 vs 2200 IOPS).

I am running Pacific on this test cluster.

Is there documentation anywhere on how this works? If anyone could explain it, that would be great.

I did not find any documentation on how the OSD benchmark works, only on how to use it. But playing with it a little, it seems the results we get are highly dependent on the block size we use. The same goes for rados bench: at least in my tests, the results depend on the block size, which I find a little odd, to be honest.
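For reference, the block-size dependence described above can be reproduced by invoking the OSD bench manually through the admin interface. A minimal sketch; the OSD id and the exact byte counts here are illustrative, not taken from this thread:

```shell
# Run the OSD's built-in write benchmark by hand.
# Syntax: ceph tell osd.<id> bench [TOTAL_BYTES] [BLOCK_SIZE_BYTES]
# With no arguments it writes 1 GiB in 4 MiB blocks.

# Large (4 MiB) blocks:
ceph tell osd.0 bench 1073741824 4194304

# Small (4 KiB) blocks over a smaller total -- the reported
# bytes_per_sec and IOPS figures can differ dramatically:
ceph tell osd.0 bench 104857600 4096
```

Comparing the JSON output of the two runs makes the sensitivity to block size visible directly, without waiting for an OSD restart.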

And since mClock depends on that measurement, it has a real performance impact. On our cluster we can reach much better performance if we tweak those values ourselves instead of letting the cluster do its own measurements. And this seems to affect certain disk vendors more than others.
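For anyone who wants to do the same tweaking, the capacity value the startup benchmark stored can be inspected and overridden per OSD. A sketch; the IOPS value below is purely an example, to be replaced with a figure measured externally (e.g. with fio):

```shell
# Show the value mClock is currently using for this OSD:
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

# Override it with an externally measured value:
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 600
```

The override survives restarts, so the startup benchmark result no longer drives the mClock scheduler for that OSD.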

Luis Domingues
Proton AG
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


