Re: Distribution of performance under load.

On 10/17/2013 11:03 PM, Gregory Farnum wrote:
On Thu, Oct 17, 2013 at 6:19 AM, Robert van Leeuwen
<Robert.vanLeeuwen@xxxxxxxxxxxxx> wrote:
Hi,

I'm wondering how Ceph behaves when there are multiple sources writing heavily to the same pool (e.g. several OpenStack Nova compute nodes).
Will each get its own "fair share", or will a very heavy user impact all the others?

Ceph doesn't do any real QoS, so a heavy user can impact others in
terms of request latencies. The OSD does, however, make some attempt to
dispatch client messages fairly once they arrive, so a single heavy
writer can't starve out other users. (Requests are dispatched
round-robin, and with the standard striping policies IOPS, rather than
bandwidth, are going to be the dominating expense.)

Are there ways to tune this?

I think there's some stuff you can apply to librbd at the client level
(Josh?), but there's unfortunately not much you can do server-side
right now.

If you are running with KVM (which you are when using OpenStack) you should look at the iotune options of libvirt.

I don't know if OpenStack supports that yet, but CloudStack 4.2, for example, gained support for it.

This way you can enforce QoS on IOPS at the client side.
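For reference, the iotune settings mentioned above go inside the <disk> element of the guest's libvirt domain XML. A minimal sketch (the limits of 500 IOPS / 50 MB/s and the pool/image/device names are made-up examples, not recommendations):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='mypool/myimage'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <!-- cap combined read+write IOPS for this disk -->
    <total_iops_sec>500</total_iops_sec>
    <!-- cap combined read+write throughput (bytes/sec), here 50 MB/s -->
    <total_bytes_sec>52428800</total_bytes_sec>
  </iotune>
</disk>
```

Note that the throttling is enforced per disk by QEMU on the client side, so it limits each guest individually rather than providing cluster-wide fairness.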

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on