Re: Ceph Block Storage QoS


On 11/08/2013 08:58 AM, Josh Durgin wrote:
On 11/08/2013 03:13 PM, james@xxxxxxxxxxxx wrote:

On 2013-11-08 03:20, Haomai Wang wrote:
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin <josh.durgin@xxxxxxxxxxx>
wrote:

I just list commands below to help users to understand:

cinder qos-create high_read_low_write consumer="front-end"
read_iops_sec=1000 write_iops_sec=10


Does this do any normalisation of the I/O units, for example to 8K or
something? In VMware we've had similar controls for ages, but they're not
useful, as a Windows server will throw out 4 MB I/Os and skew all the
metrics.

I don't think it does any normalization, but you could have different
limits for different volume types, and use one volume type for windows
and one volume type for non-windows. This might not make sense for all
deployments, but it may be a usable workaround for that issue.
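A rough sketch of that workaround with the cinder CLI (the spec names, limit values, and type names below are illustrative, not from this thread; the `<...-id>` placeholders would come from the `qos-create` / `type-create` output):

```shell
# Stricter per-second limits for Windows guests that issue large I/Os
cinder qos-create windows-qos consumer="front-end" \
    read_iops_sec=250 write_iops_sec=250

# A separate, looser spec for other guests
cinder qos-create general-qos consumer="front-end" \
    read_iops_sec=1000 write_iops_sec=1000

# One volume type per workload, each bound to its QoS spec
cinder type-create windows
cinder type-create general
cinder qos-associate <windows-qos-id> <windows-type-id>
cinder qos-associate <general-qos-id> <general-type-id>
```

Users then pick the matching volume type at volume-create time, so each
workload gets its own limits without any per-I/O-size normalisation.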


It is supported by Qemu. You can set limits not just on IOps but also on bandwidth, each for read, write, or total.

I don't know if OpenStack supports it though.
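For reference, a sketch of what that looks like when launching QEMU directly, using the per-drive throttling options (`iops_rd`, `iops_wr`, `bps_rd`, `bps_wr`, and the combined `iops`/`bps`) available in QEMU of this era; the image path and values are illustrative:

```shell
# Throttle an RBD-backed drive: 1000 read IOps, 10 write IOps,
# and 100 MB/s of total bandwidth (values are examples only)
qemu-system-x86_64 \
    -drive file=rbd:rbd/myimage,format=raw,if=virtio,\
iops_rd=1000,iops_wr=10,bps=100000000
```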

Josh

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on



