Re: QoS Control for RBD I/Os?

Hi Cheng Cheng,

There is already a blueprint for this
(wiki.ceph.com/Planning/Blueprints/Giant/Add_QoS_capacity_to_librbd),
but no one has picked it up for implementation yet.

If you are interested, I think we can push it further together. If not,
I would like to pick it up later. :-)
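To make the idea a bit more concrete, here is a rough client-side sketch
of the kind of token-bucket throttle the blueprint could grow into.
Everything in it (class name, limits, where it hooks in) is just my own
illustration and not existing librbd code; the real work would be deciding
where such a gate belongs inside librbd itself.

// Sketch only: a token bucket that could gate each rbd_aio_write() /
// rbd_aio_read() submission on the client side. Not part of librbd.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

class TokenBucket {
public:
  // iops_limit: sustained ops/sec; burst: maximum saved-up tokens.
  TokenBucket(double iops_limit, double burst)
      : rate_(iops_limit), burst_(burst), tokens_(burst),
        last_(std::chrono::steady_clock::now()) {}

  // Block until one token is available, then consume it.
  void acquire() {
    std::unique_lock<std::mutex> lock(mu_);
    refill();
    while (tokens_ < 1.0) {
      lock.unlock();
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
      lock.lock();
      refill();
    }
    tokens_ -= 1.0;
  }

private:
  // Add tokens for the time elapsed since the last refill, capped at burst.
  void refill() {
    auto now = std::chrono::steady_clock::now();
    double elapsed = std::chrono::duration<double>(now - last_).count();
    last_ = now;
    tokens_ = std::min(burst_, tokens_ + elapsed * rate_);
  }

  double rate_;
  double burst_;
  double tokens_;
  std::chrono::steady_clock::time_point last_;
  std::mutex mu_;
};

int main() {
  // Example numbers: cap submissions at 100 ops/sec with a burst of 10.
  TokenBucket qos(100.0, 10.0);
  for (int i = 0; i < 20; ++i) {
    qos.acquire();  // in a real client this would sit in front of rbd_aio_write()
    std::cout << "submit io " << i << "\n";
  }
  return 0;
}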


On Fri, Jan 16, 2015 at 1:53 AM, Cheng Cheng <ccheng.leo@xxxxxxxxx> wrote:
> Hi Ceph,
>
> I am wondering whether there is a mechanism to prioritize rbd_aio_write/rbd_aio_read I/Os. Currently all RBD I/Os are issued FIFO to the rados layer, and there is no QoS mechanism to control their priority.
>
> A QoS mechanism would be beneficial when performing certain management operations, such as flatten. When flattening an image, the outstanding management I/Os are throttled by "rbd_concurrent_management_ops". However, this does not guarantee that normal I/Os are unaffected, since outstanding normal I/Os still compete with the concurrent management ops.
>
> Anyone know how/where to implement this QoS mechanism?
>
> Thanks!
> Cheng
>
>
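On the flatten point above: as far as I know the only knob today is
indeed rbd_concurrent_management_ops, which just caps the number of
in-flight management ops and reserves nothing for normal I/O. For anyone
following along, it can be lowered with a ceph.conf snippet like the one
below (the value 2 is only an example):

[client]
    # cap in-flight ops issued by management operations such as flatten
    rbd concurrent management ops = 2

That makes flatten gentler, but as Cheng says the remaining ops still
compete head-to-head with guest I/O, which is exactly what a real QoS
mechanism in librbd would need to address.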



-- 
Best Regards,

Wheat


