On Tue, Oct 18, 2011 at 5:56 PM, Kevin Wolf <kwolf@xxxxxxxxxx> wrote:
> On 18.10.2011 11:29, Zhi Yong Wu wrote:
>>>>>>>> +void qemu_del_block_queue(BlockQueue *queue)
>>>>>>>> +{
>>>>>>>> +    BlockQueueAIOCB *request, *next;
>>>>>>>> +
>>>>>>>> +    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
>>>>>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>>>>>> +        qemu_aio_release(request);
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>> +    g_free(queue);
>>>>>>>> +}
>>>>>>>
>>>>>>> Can we be sure that no AIO requests are in flight that still use the now
>>>>>>> released AIOCB?
>>>>>> Yeah, since the QEMU core code runs serially, I think that no requests
>>>>>> are in flight by the time qemu_del_block_queue() runs. Right?
>>>>>
>>>>> Patch 2 has this code:
>>>>>
>>>>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>>>>> +{
>>>>> +    bs->io_limits_enabled = false;
>>>>> +
>>>>> +    if (bs->block_queue) {
>>>>> +        qemu_block_queue_flush(bs->block_queue);
>>>>> +        qemu_del_block_queue(bs->block_queue);
>>>>> +        bs->block_queue = NULL;
>>>>> +    }
>>>>>
>>>>> Does this mean that you can't disable I/O limits while the VM is running?
>>>> No, you can disable them even while the VM is running.
>>>
>>> Okay, in general qemu_block_queue_flush() empties the queue so that
>>> there are no requests left that qemu_del_block_queue() could drop from
>>> the queue. So in the common case it doesn't even enter the FOREACH loop.
>> I think we should use !QTAILQ_EMPTY(&queue->requests) rather than
>> QTAILQ_FOREACH_SAFE in qemu_del_block_queue(), right?
>
> I think QTAILQ_FOREACH_SAFE is fine.
>
>>>
>>> I think problems start when requests have failed or exceeded the limit
>>> again; then you have requests queued even after
>>> qemu_block_queue_flush(). You must be aware of this, otherwise the code
>>> in qemu_del_block_queue() wouldn't exist.
>>>
>>> But you can't release the ACBs without having called their callback,
>>> otherwise the caller would still assume that its ACB pointer is valid.
>>> Maybe calling the callback before releasing the ACB would be enough.
>> Good, thanks.
>>>>>
>>>>>>>> +            }
>>>>>>>> +        }
>>>>>>>> +    }
>>>>>>>> +
>>>>>>>> +    queue->req_failed = true;
>>>>>>>> +    queue->flushing = false;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> +bool qemu_block_queue_has_pending(BlockQueue *queue)
>>>>>>>> +{
>>>>>>>> +    return !queue->flushing && !QTAILQ_EMPTY(&queue->requests);
>>>>>>>> +}
>>>>>>>
>>>>>>> Why doesn't the queue have pending requests in the middle of a flush
>>>>>>> operation? (That is, the flush hasn't completed yet)
>>>>>> It is possible for the queue to have pending requests. If yes, how about?
>>>>>
>>>>> Sorry, can't parse this.
>>>>>
>>>>> I don't understand why the !queue->flushing part is correct.
>>>
>>> What about this?
>> When bdrv_aio_readv/writev handles a request, it checks whether the
>> block queue is non-NULL and not currently being flushed; if so, it
>> assumes the request is a new one coming from the upper layer, so it
>> does not check whether the runtime I/O rate has exceeded the limits,
>> but immediately inserts it into the block queue.
>
> Hm, I think I understand what you're saying, but only after looking at
> patch 3. This is not really implementing a has_pending(), but
> has_pending_and_caller_wasnt_called_during_flush(). I think it would be

Correct.

> better to handle the queue->flushing condition in the caller where its
> use is more obvious.

OK, I will do it that way.
>
> Kevin
>

--
Regards,

Zhi Yong Wu
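
A minimal sketch of the fix Kevin suggests above: complete each still-queued
request's callback before releasing its ACB, so that no caller is left
holding a dangling pointer. It assumes BlockQueueAIOCB embeds a
BlockDriverAIOCB named "common", as other QEMU AIOCBs do, and it picks
-EIO as the completion error; the actual series may prefer a different
error code:

void qemu_del_block_queue(BlockQueue *queue)
{
    BlockQueueAIOCB *request, *next;

    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
        QTAILQ_REMOVE(&queue->requests, request, entry);
        /* Tell the caller its request is finished (here: failed with
         * -EIO, an assumed error code) before the ACB goes away. */
        request->common.cb(request->common.opaque, -EIO);
        qemu_aio_release(request);
    }

    g_free(queue);
}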
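
And a sketch of the refactoring Kevin proposes at the end of the thread:
qemu_block_queue_has_pending() reports only whether requests are queued,
and the throttling path in bdrv_aio_readv/writev (patch 3) handles the
flush-in-progress case itself, where its purpose is visible. The
qemu_block_queue_is_flushing() accessor and the exact shape of the
enqueue call are assumptions for illustration, not the series' actual
interface:

bool qemu_block_queue_has_pending(BlockQueue *queue)
{
    return !QTAILQ_EMPTY(&queue->requests);
}

/* Hypothetical accessor so the caller can ask about the flush state. */
bool qemu_block_queue_is_flushing(BlockQueue *queue)
{
    return queue->flushing;
}

/* In the bdrv_aio_readv() throttling path (simplified): a new request
 * that arrives while older ones are still queued, and while no flush
 * is replaying the queue, is queued behind them without another limit
 * check. */
if (bs->io_limits_enabled
    && !qemu_block_queue_is_flushing(bs->block_queue)
    && qemu_block_queue_has_pending(bs->block_queue)) {
    return qemu_block_queue_enqueue(bs->block_queue, bs, bdrv_aio_readv,
                                    sector_num, qiov, nb_sectors,
                                    cb, opaque);
}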