Re: capacity planning - iops

Are you talking about global IOPS or per-VM/per-RBD device?
And at what queue depth?
It all comes down to latency. I'm not sure what the numbers are on recent versions of Ceph and on modern OSes, but I doubt it will be under 1 ms for the OSD daemon alone. At roughly 1 ms per operation, that gives you about 1000 real synchronous IOPS. With higher queue depths (or with more RBD devices in parallel) you can reach higher numbers, but you need to know what your application needs.
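
A minimal sketch of that latency-to-IOPS arithmetic, in Python; the 1 ms figure and the queue depths are assumptions for illustration, not measurements from any particular cluster:

# Upper bound on IOPS for a given per-op latency, assuming the daemon/device
# can overlap queue_depth operations perfectly (it usually cannot).
def iops(latency_s: float, queue_depth: int = 1) -> float:
    return queue_depth / latency_s

osd_latency = 0.001  # assumed ~1 ms for the OSD daemon alone

print(iops(osd_latency, queue_depth=1))   # ~1000 real synchronous IOPS
print(iops(osd_latency, queue_depth=16))  # higher only if the ops really run in parallel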
For SATA drives you need to add their latency on top of that, and throughput only scales when the writes are distributed across all the drives: if you hammer a single 4k region it will still hit the same drives even at higher queue depth, which may or may not increase throughput, and can even make it worse...
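
A rough sketch of what that looks like once the drive latency is added; the 1 ms OSD overhead, the ~8 ms average latency of a 7.2k SATA drive and the 24-drive count are assumed for illustration only:

osd_latency_s  = 0.001   # assumed OSD daemon overhead per write
sata_latency_s = 0.008   # assumed avg. latency of a 7.2k SATA drive
drives         = 24      # assumed number of OSD data drives

per_drive_qd1_iops = 1.0 / (osd_latency_s + sata_latency_s)   # ~111 IOPS

# Best case: writes are spread evenly over all drives.
spread_iops = per_drive_qd1_iops * drives

# Worst case: hammering one small (e.g. 4k) region keeps hitting the same
# drives, so extra queue depth adds no real parallelism.
hot_spot_iops = per_drive_qd1_iops

print(f"per drive: {per_drive_qd1_iops:.0f}, spread: {spread_iops:.0f}, hot spot: {hot_spot_iops:.0f}")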

Jan


On 19 Sep 2016, at 16:23, Matteo Dacrema <mdacrema@xxxxxxxx> wrote:

Hi All,

I’m trying to estimate how many IOPS (4k direct random writes) my Ceph cluster should deliver.
I have journals on SSDs and SATA 7.2k drives for the OSDs.

The question is: does the journal on SSD increase the maximum number of write IOPS, or do I need to consider only the IOPS provided by the SATA drives divided by the replica count?
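
As a point of reference, here is a back-of-envelope sketch of the "SATA IOPS divided by replica count" estimate mentioned above; the drive count, per-drive IOPS and replica size are assumed example values, not measured figures:

drives         = 24    # assumed number of SATA OSD drives
iops_per_drive = 100   # assumed random-write IOPS of a 7.2k SATA drive
replica_count  = 3     # assumed pool size

estimated_write_iops = drives * iops_per_drive / replica_count
print(estimated_write_iops)  # ~800 sustained 4k random write IOPS under these assumptions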

Regards
M.




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
