On Tue, Apr 11, 2017 at 06:18:36PM +0000, Bart Van Assche wrote:
> On Tue, 2017-04-11 at 14:03 -0400, Mike Snitzer wrote:
> > Rather than working so hard to use DM code against me, your argument
> > should be: "blk-mq drivers X, Y and Z rerun the hw queue; this is a
> > well established pattern"
> >
> > I see drivers/nvme/host/fc.c:nvme_fc_start_fcp_op() does.  But that is
> > only one other driver out of ~20 BLK_MQ_RQ_QUEUE_BUSY returns
> > tree-wide.
> >
> > Could be there are some others, but hardly a well-established pattern.
>
> Hello Mike,
>
> Several blk-mq drivers that can return BLK_MQ_RQ_QUEUE_BUSY from their
> .queue_rq() implementation stop the request queue (blk_mq_stop_hw_queue())
> before returning "busy" and restart the queue after the busy condition
> has been cleared (blk_mq_start_stopped_hw_queues()). Examples are
> virtio_blk and xen-blkfront. However, this approach is not appropriate
> for the dm-mq core nor for the scsi core since both drivers already use
> the "stopped" state for another purpose than tracking whether or not a
> hardware queue is busy. Hence the blk_mq_delay_run_hw_queue() and
> blk_mq_run_hw_queue() calls in these last two drivers to rerun a
> hardware queue after the busy state has been cleared.

But it looks like this patch simply reruns the hw queue after 100ms,
which isn't necessarily after the busy state has been cleared, right?

Actually, if BLK_MQ_RQ_QUEUE_BUSY is returned from .queue_rq(), blk-mq
buffers the request in hctx->dispatch and runs the hw queue again, so my
first impression is that blk_mq_delay_run_hw_queue() shouldn't be needed
in this situation. Or maybe Bart has more background on this usage? If
so, it would be better to document it with a comment.

Thanks,
Ming
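
P.S. For reference, below is a rough sketch of the stop/restart pattern
described above, loosely modelled on how virtio_blk and xen-blkfront
handle a busy .queue_rq(). It is not taken from any real driver; the
example_* names and struct example_dev are placeholders.

	#include <linux/blkdev.h>
	#include <linux/blk-mq.h>

	/* Hypothetical driver state, for illustration only. */
	struct example_dev {
		struct request_queue	*queue;
		/* ... hardware ring, resource counters, locking, ... */
	};

	/* Placeholder helpers assumed to exist in the hypothetical driver. */
	static bool example_have_resources(struct example_dev *dev);
	static void example_submit(struct example_dev *dev, struct request *rq);

	static int example_queue_rq(struct blk_mq_hw_ctx *hctx,
				    const struct blk_mq_queue_data *bd)
	{
		struct example_dev *dev = hctx->queue->queuedata;
		struct request *rq = bd->rq;

		if (!example_have_resources(dev)) {
			/*
			 * Stop the hw queue so blk-mq does not keep calling
			 * .queue_rq() while no progress can be made, then
			 * return "busy" so the request is requeued rather
			 * than failed.
			 */
			blk_mq_stop_hw_queue(hctx);
			return BLK_MQ_RQ_QUEUE_BUSY;
		}

		blk_mq_start_request(rq);
		example_submit(dev, rq);	/* hand the request to the hardware */

		return BLK_MQ_RQ_QUEUE_OK;
	}

	/*
	 * Completion path: once the busy condition has actually been cleared,
	 * restart any stopped hw queues so the requeued requests are
	 * dispatched again.
	 */
	static void example_resources_freed(struct example_dev *dev)
	{
		blk_mq_start_stopped_hw_queues(dev->queue, true);
	}

As Bart points out, dm-mq and scsi cannot use this scheme because they
already use the "stopped" state for other purposes, which is why they
rerun the queue with blk_mq_run_hw_queue()/blk_mq_delay_run_hw_queue()
instead.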