Qemu iotune values for RBD

Hi all,

We're about to go live with some QEMU rate limiting for our RBD volumes, and I wanted to cross-check our values with this list, in case someone can chime in with their experience or known best practices.

The only reasonable, non-test-suite values I found on the web are:

iops_wr 200
iops_rd 400
bps_wr 40000000   (40 MB/s)
bps_rd 80000000   (80 MB/s)

and those seem (to me) to offer a "pretty good" service level, with more IOPS than a typical single spinning disk yet lower peak throughput (which is good considering the single gigabit NICs on our hypervisors).
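In case it helps for comparison, this is roughly how we're planning to express those limits in the libvirt disk definition. It's only a sketch: the pool/image name, monitor host and target device below are placeholders for whatever your setup uses.

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- pool/image and monitor host are placeholders -->
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <iotune>
        <!-- libvirt hands these to qemu as iops_rd/iops_wr/bps_rd/bps_wr -->
        <read_iops_sec>400</read_iops_sec>
        <write_iops_sec>200</write_iops_sec>
        <read_bytes_sec>80000000</read_bytes_sec>
        <write_bytes_sec>40000000</write_bytes_sec>
      </iotune>
    </disk>

The same limits can also be applied or adjusted on a running guest with something like:

    virsh blkdeviotune <domain> vda --read-iops-sec 400 --write-iops-sec 200 \
        --read-bytes-sec 80000000 --write-bytes-sec 40000000 --live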

Our main goal for the rate limiting is to protect the cluster from abusive users running fio, etc., while not overly restricting our varied legitimate applications.

Any opinions here?

Cheers, Dan

