On Thu, Aug 20, 2020 at 11:54:06AM +0800, Chao Leng wrote:
> A deadlock happens when we test nvme over roce with link blink. The
> reason: link blink causes error recovery, followed by reconnect. If the
> reconnect fails due to an nvme_set_queue_count timeout, the reconnect
> process sets the queue count to 0 and continues; nvme_start_ctrl then
> calls nvme_enable_aen, and a deadlock happens because the admin queue
> is quiesced.
>
> log:
> Aug 3 22:47:24 localhost kernel: nvme nvme2: I/O 22 QID 0 timeout
> Aug 3 22:47:24 localhost kernel: nvme nvme2: Could not set queue count (881)
>
> stack:
> root 23848 0.0 0.0 0 0 ? D Aug03 0:00 [kworker/u12:4+nvme-wq]
> [<0>] blk_execute_rq+0x69/0xa0
> [<0>] __nvme_submit_sync_cmd+0xaf/0x1b0 [nvme_core]
> [<0>] nvme_features+0x73/0xb0 [nvme_core]
> [<0>] nvme_start_ctrl+0xa4/0x100 [nvme_core]
> [<0>] nvme_rdma_setup_ctrl+0x438/0x700 [nvme_rdma]
> [<0>] nvme_rdma_reconnect_ctrl_work+0x22/0x30 [nvme_rdma]
> [<0>] process_one_work+0x1a7/0x370
> [<0>] worker_thread+0x30/0x380
> [<0>] kthread+0x112/0x130
> [<0>] ret_from_fork+0x35/0x40
>
> Many functions that call __nvme_submit_sync_cmd treat the error code in
> two ways: if it is less than 0, the command is treated as failed; if it
> is greater than 0, the target is assumed not to support the feature (or
> similar) and the caller continues. NVME_SC_HOST_ABORTED_CMD and
> NVME_SC_HOST_PATH_ERROR both indicate an I/O cancelled by the host, not
> a real status code returned from the target. So we need to set the
> NVME_REQ_CANCELLED flag. __nvme_submit_sync_cmd then translates the
> error to -EINTR, nvme_set_queue_count returns an error, and the
> reconnect process terminates instead of continuing.

But we could still race with a real completion. I suspect the right
answer is to translate NVME_SC_HOST_ABORTED_CMD and
NVME_SC_HOST_PATH_ERROR to a negative error code in
__nvme_submit_sync_cmd.