Hi Alexandre,

On Mon, Aug 22, 2022 at 4:03 PM Alexandre Courbot <acourbot@xxxxxxxxxxxx> wrote:
>
> Hi Suwan, apologies for taking so long to come back to this.
>
> On Tue, Aug 2, 2022 at 11:50 PM Kim Suwan <suwan.kim027@xxxxxxxxx> wrote:
> >
> > Hi Alexandre
> >
> > On Tue, Aug 2, 2022 at 11:12 AM Alexandre Courbot <acourbot@xxxxxxxxxxxx> wrote:
> > >
> > > Hi Suwan,
> > >
> > > Thanks for the fast reply!
> > >
> > > On Tue, Aug 2, 2022 at 1:55 AM Kim Suwan <suwan.kim027@xxxxxxxxx> wrote:
> > > >
> > > > Hi Alexandre,
> > > >
> > > > Thanks for reporting the issue.
> > > >
> > > > I think a possible scenario is that the request fails at
> > > > virtio_queue_rqs() and is passed to the normal path (virtio_queue_rq).
> > > >
> > > > In this procedure, it is possible that blk_mq_start_request()
> > > > was called twice, changing the request state from MQ_RQ_IN_FLIGHT to
> > > > MQ_RQ_IN_FLIGHT.
> > >
> > > I have checked whether virtblk_prep_rq_batch() within
> > > virtio_queue_rqs() ever returns 0, and it looks like it never happens.
> > > So as far as I can tell all virtio_queue_rqs() are processed
> > > successfully - but maybe the request can also fail further down the
> > > line? Is there some extra instrumentation I can do to check that?
> >
> > I'm looking at one more suspicious piece of code.
> > If virtblk_add_req() fails within virtblk_add_req_batch(),
> > virtio_queue_rqs() passes the failed request to the normal path as well
> > (virtio_queue_rq). Then it can call blk_mq_start_request() twice.
> >
> > Because I can't reproduce the issue on my VM, could you test
> > the patch below?
> > I defer the blk_mq_start_request() call until after virtblk_add_req()
> > to ensure that we call blk_mq_start_request() only after all the
> > preparations finish.
>
> Your patch seems to solve the problem! I am not seeing the warning
> anymore and the block device looks happy.

Good news! Thanks for the test!

> Let me know if I can do anything else.

Could you test one more patch?
I move blk_mq_start_request(req) to before the spin_lock_irqsave() call to
reduce the time the lock is held within virtio_queue_rq(). If it is OK,
I will send the patch.

Regards,
Suwan Kim

---
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 30255fcaf181..73a0620a7cff 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -322,8 +322,6 @@ static blk_status_t virtblk_prep_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(status))
 		return status;

-	blk_mq_start_request(req);
-
 	vbr->sg_table.nents = virtblk_map_data(hctx, req, vbr);
 	if (unlikely(vbr->sg_table.nents < 0)) {
 		virtblk_cleanup_cmd(req);
@@ -349,6 +347,8 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (unlikely(status))
 		return status;

+	blk_mq_start_request(req);
+
 	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
 	err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
 	if (err) {
@@ -409,6 +409,8 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
 			virtblk_unmap_data(req, vbr);
 			virtblk_cleanup_cmd(req);
 			rq_list_add(requeue_list, req);
+		} else {
+			blk_mq_start_request(req);
 		}
 	}