Re: QoS Control for RBD I/Os?

At some point you still need to do the actual promotion of the block
from the base image to the child image. The only time you get a
conflict is when a client op arrives at the same time. So I guess you
could try to avoid starting any promotion while there are client ops
pending, but ops that come in during the promote are still stuck
until it finishes.
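
To make the shape of that concrete, here's a rough sketch of the
synchronization involved (hypothetical names throughout; this is not
the actual librbd flatten path, just an illustration):

#include <mutex>
#include <condition_variable>

struct ObjectState {
  std::mutex lock;
  std::condition_variable cond;
  int pending_client_ops = 0;  // client I/Os in flight on this object
  bool promoting = false;      // copy-up from the base image in progress
};

// Skip objects that have client ops pending; retry them later.
bool try_start_promote(ObjectState &obj) {
  std::lock_guard<std::mutex> l(obj.lock);
  if (obj.pending_client_ops > 0)
    return false;
  obj.promoting = true;
  return true;
}

void finish_promote(ObjectState &obj) {
  std::lock_guard<std::mutex> l(obj.lock);
  obj.promoting = false;
  obj.cond.notify_all();  // wake client ops that arrived mid-promote
}

// A client op arriving during a promote still blocks until it finishes;
// the race is only narrowed, not removed.
void client_op_enter(ObjectState &obj) {
  std::unique_lock<std::mutex> l(obj.lock);
  obj.cond.wait(l, [&] { return !obj.promoting; });
  ++obj.pending_client_ops;
}

void client_op_exit(ObjectState &obj) {
  std::lock_guard<std::mutex> l(obj.lock);
  --obj.pending_client_ops;
}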

There is a mechanism that lets you set different priorities on
messages and the operations that result from them; you could perhaps
try to hack things up so that the client performing the flatten
assigns a much lower priority to its messages than regular rbd
clients do. Search for the CEPH_MSG_PRIO* stuff in the code base if
you want to explore that (it will involve some coding; I don't think
we have any config options that make it easy).
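
For example, something along these lines (untested sketch; the
CEPH_MSG_PRIO_* constants are real and live in src/include/msgr.h,
but the helper function here is hypothetical):

#include "msg/Message.h"
#include "msg/Connection.h"

// Normal client I/O goes out at CEPH_MSG_PRIO_DEFAULT (127); dropping
// a flatten client's messages to CEPH_MSG_PRIO_LOW (64) should, in
// principle, let prioritized dispatch service regular I/O first.
void send_low_priority(Connection *con, Message *m) {
  m->set_priority(CEPH_MSG_PRIO_LOW);
  con->send_message(m);
}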
-Greg

On Thu, Jan 15, 2015 at 10:06 AM, Cheng Cheng <ccheng.leo@xxxxxxxxx> wrote:
> Hi Greg,
>
> Thanks for sharing the insight.
>
> If we limit the scope of QoS to a single RBD image, some QoS
> mechanism would still be beneficial. Since there is an effort to
> asynchronously flatten an image, the flatten operation should run at
> a lower priority than normal I/O to limit the latency impact, right?
>
> Cheng
>
>
>
> On Thu, Jan 15, 2015 at 12:59 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>> On Thu, Jan 15, 2015 at 9:53 AM, Cheng Cheng <ccheng.leo@xxxxxxxxx> wrote:
>>> Hi Ceph,
>>>
>>> I am wondering whether there is a mechanism to prioritize rbd_aio_write/rbd_aio_read I/Os. Currently all RBD I/Os are issued FIFO to the RADOS layer, and there is no QoS mechanism to control the priority of these I/Os.
>>>
>>> A QoS mechanism would be beneficial when performing certain management operations, such as flatten. When flattening an image, the outstanding management I/Os are throttled by "rbd_concurrent_management_ops". However, this doesn't guarantee that normal I/Os are unaffected, since outstanding normal I/Os still compete with the concurrent management ops.
>>>
>>> Does anyone know how/where such a QoS mechanism could be implemented?
>>
>> Sadly, there's no QoS in RBD right now. It sounds like you're more
>> concerned with preventing management operations from impacting
>> client IO, but even that is a distributed problem (flatten, for
>> instance, involves moving data between storage machines), which is
>> still an open research topic as far as I know. I don't think you'll
>> ever be able to get a guarantee, and simply doing one movement at a
>> time is probably as good as you can get in the medium term. :(
>> -Greg