On Mon, Mar 16, 2020 at 08:26:35PM +0800, Yufen Yu wrote:
> Ping, and Cc to more experts in blk-mq.
> 
> On 2020/3/3 21:08, Yufen Yu wrote:
> > Our test robot reported a warning from refcount_dec() trying to decrease
> > a value of '0'. The reason is that blk_mq_dispatch_rq_list() tries to
> > complete the failed request from the nbd driver, while the request has
> > already been finished by the nbd timeout handler. The race is as follows:
> > 
> > CPU1                                    CPU2
> > 
> > //req->ref = 1
> > blk_mq_dispatch_rq_list
> >   nbd_queue_rq
> >   nbd_handle_cmd
> >   blk_mq_start_request
> >                                         blk_mq_check_expired
> >                                         //req->ref = 2
> >                                         blk_mq_rq_timed_out
> >                                         nbd_xmit_timeout

This shouldn't happen in reality, given that rq->deadline has just been
updated in blk_mq_start_request(), assuming you use the default 30 second
timeout.

How can the race be triggered in such a short time? Could you explain
your test case a bit?

> >                                         blk_mq_complete_request
> >                                         //req->ref = 1
> >                                         refcount_dec_and_test(&req->ref)
> > 
> >                                         refcount_dec_and_test(&req->ref)
> >                                         //req->ref = 0
> >                                         __blk_mq_free_request(req)
> > ret = BLK_STS_IOERR
> > blk_mq_end_request
> > // req->ref = 0, req has been freed
> > refcount_dec_and_test(&rq->ref)
> > 
> > In fact, the bug has also been reported by syzbot:
> > https://lkml.org/lkml/2018/12/5/1308
> > 
> > Since the request has been freed by the timeout handler, it can be reused
> > by others. Then, blk_mq_end_request() may see the re-initialized request
> > and free it again, which is unexpected.
> > 
> > To fix the problem, we move blk_mq_start_request() down to the point
> > where the driver actually handles the request. If .queue_rq returns an
> > error in the preparation phase, no timeout handling is needed, so moving
> > the request start down is more reasonable. Then, nbd_queue_rq() will not
> > return BLK_STS_IOERR after the request has been started.
> > 
> > Reported-by: Hulk Robot <hulkci@xxxxxxxxxx>
> > Signed-off-by: Yufen Yu <yuyufen@xxxxxxxxxx>
> > ---
> >  drivers/block/nbd.c | 6 ++----
> >  1 file changed, 2 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> > index 78181908f0df..5256e9d02a03 100644
> > --- a/drivers/block/nbd.c
> > +++ b/drivers/block/nbd.c
> > @@ -541,6 +541,8 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
> >  		return -EIO;
> >  	}
> > +	blk_mq_start_request(req);
> > +
> >  	if (req->cmd_flags & REQ_FUA)
> >  		nbd_cmd_flags |= NBD_CMD_FLAG_FUA;
> > @@ -879,7 +881,6 @@ static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
> >  	if (!refcount_inc_not_zero(&nbd->config_refs)) {
> >  		dev_err_ratelimited(disk_to_dev(nbd->disk),
> >  				    "Socks array is empty\n");
> > -		blk_mq_start_request(req);

I think it is fine not to start the request in case of failure, given
that __blk_mq_end_request() doesn't check the rq's state.

Thanks,
Ming
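
For reference, below is a minimal sketch of the ordering the patch aims
for; it is illustrative only and not the actual nbd code, and the
example_prepare_cmd()/example_send_cmd() helpers are hypothetical stand-ins
for the driver's preparation and submission steps. The idea is: fail while
the request is still unstarted so the timeout handler can never see it, and
once the request has been started, complete it from inside the driver
instead of returning an error status to the block layer, so only one path
ever drops the rq->ref reference.

#include <linux/blk-mq.h>

/* Hypothetical helpers, stubbed out for illustration only. */
static int example_prepare_cmd(struct request *req) { return 0; }
static int example_send_cmd(struct request *req) { return 0; }

static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	struct request *req = bd->rq;

	/*
	 * Fail before blk_mq_start_request(): the request was never
	 * started, so the timeout handler cannot pick it up, and the
	 * block layer can safely end it when we return an error status.
	 */
	if (example_prepare_cmd(req) < 0)
		return BLK_STS_IOERR;

	blk_mq_start_request(req);

	/*
	 * Once the request has been started, do not return an error
	 * status to the block layer; complete it from inside the driver
	 * so only one path drops the rq->ref reference.
	 */
	if (example_send_cmd(req) < 0) {
		blk_mq_end_request(req, BLK_STS_IOERR);
		return BLK_STS_OK;
	}

	return BLK_STS_OK;
}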