A deadlock happens when we test NVMe over RoCE with link blink. The
reason: link blink triggers error recovery, followed by a reconnect.
If the reconnect fails because nvme_set_queue_count times out, the
reconnect process sets the queue count to 0 and continues, and then
nvme_start_ctrl calls nvme_enable_aen, which deadlocks because the
admin queue is quiesced.

log:
Aug 3 22:47:24 localhost kernel: nvme nvme2: I/O 22 QID 0 timeout
Aug 3 22:47:24 localhost kernel: nvme nvme2: Could not set queue count (881)

stack:
root     23848  0.0  0.0      0     0 ?  D  Aug03  0:00 [kworker/u12:4+nvme-wq]
[<0>] blk_execute_rq+0x69/0xa0
[<0>] __nvme_submit_sync_cmd+0xaf/0x1b0 [nvme_core]
[<0>] nvme_features+0x73/0xb0 [nvme_core]
[<0>] nvme_start_ctrl+0xa4/0x100 [nvme_core]
[<0>] nvme_rdma_setup_ctrl+0x438/0x700 [nvme_rdma]
[<0>] nvme_rdma_reconnect_ctrl_work+0x22/0x30 [nvme_rdma]
[<0>] process_one_work+0x1a7/0x370
[<0>] worker_thread+0x30/0x380
[<0>] kthread+0x112/0x130
[<0>] ret_from_fork+0x35/0x40

Many functions that call __nvme_submit_sync_cmd treat the return code
in two modes: if the code is less than 0, the command failed; if the
code is greater than 0, the target does not support the command (or
similar) and the caller continues anyway. NVME_SC_HOST_ABORTED_CMD and
NVME_SC_HOST_PATH_ERROR both mean the I/O was cancelled by the host;
they are not real status codes returned from the target. So we need to
set the NVME_REQ_CANCELLED flag. __nvme_submit_sync_cmd then
translates the error to -EINTR, nvme_set_queue_count returns an error,
and the reconnect process terminates instead of continuing.

Signed-off-by: Chao Leng <lengchao@xxxxxxxxxx>
---
 drivers/nvme/host/core.c    | 1 +
 drivers/nvme/host/fabrics.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 43ac8a1ad65d..74f76aa78b02 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -307,6 +307,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
 	if (blk_mq_request_completed(req))
 		return true;
 
+	nvme_req(req)->flags |= NVME_REQ_CANCELLED;
 	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
 	blk_mq_complete_request(req);
 	return true;
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 4ec4829d6233..6c40054f9fb4 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -552,6 +552,7 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
 		return BLK_STS_RESOURCE;
 
+	nvme_req(rq)->flags |= NVME_REQ_CANCELLED;
 	nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
 	blk_mq_start_request(rq);
 	nvme_complete_rq(rq);
-- 
2.16.4
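
For reference, a minimal sketch of the two-mode handling this patch
relies on, abridged from __nvme_submit_sync_cmd and
nvme_set_queue_count in drivers/nvme/host/core.c (annotations added;
exact code may differ between kernel versions):

	/*
	 * In __nvme_submit_sync_cmd(): a request cancelled by the host
	 * is reported as -EINTR instead of its host-side status code,
	 * so the caller sees a negative error rather than a "positive"
	 * NVMe status.
	 */
	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
		ret = -EINTR;
	else
		ret = nvme_req(req)->status;

	/*
	 * In nvme_set_queue_count(): a negative return means the
	 * command itself failed and the error is propagated, ending the
	 * reconnect; a positive NVMe status is treated as a degraded
	 * controller, and setup continues with 0 I/O queues, which is
	 * what leads to the deadlock in nvme_enable_aen.
	 */
	status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count,
			NULL, 0, &result);
	if (status < 0)
		return status;
	if (status > 0) {
		dev_err(ctrl->device,
			"Could not set queue count (%d)\n", status);
		*count = 0;
	}

Without NVME_REQ_CANCELLED, a cancelled command's status
(NVME_SC_HOST_ABORTED_CMD or NVME_SC_HOST_PATH_ERROR) is positive, so
nvme_set_queue_count takes the "degraded controller" path; with the
flag set, the command returns -EINTR and the reconnect terminates.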