On 2020/10/12 16:13, Ming Lei wrote:
On Mon, Oct 12, 2020 at 11:59:21AM +0800, Chao Leng wrote:
On 2020/10/10 14:08, Yi Zhang wrote:
On 10/10/20 2:29 AM, Sagi Grimberg wrote:
On 10/9/20 6:55 AM, Yi Zhang wrote:
Hi Sagi
On 10/9/20 4:09 PM, Sagi Grimberg wrote:
Hi Sagi
I applied this patch on block origin/for-next and can still reproduce it.
That's unexpected, can you try this patch?
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 629b025685d1..46428ff0b0fc 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2175,7 +2175,7 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
/* fence other contexts that may complete the command */
mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
- if (!blk_mq_request_completed(rq)) {
+ if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
blk_mq_complete_request_sync(rq);
}
This may just reduce the probability. The concurrency of timeout and teardown can still cause the same request
to be handled twice, which is not what we expect.
That is right. Unlike SCSI, NVMe doesn't implement atomic request completion, so a
request may be completed/freed from both the timeout handler and nvme_cancel_request().
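For reference, a rough sketch of nvme_cancel_request() as of ~v5.9 (paraphrased
from drivers/nvme/host/core.c; the point is that the completed-check and the
completion are two separate steps, so nothing makes them atomic against a
concurrent timeout-path completion):

	bool nvme_cancel_request(struct request *req, void *data, bool reserved)
	{
		/* don't abort one completed request */
		if (blk_mq_request_completed(req))
			return true;

		/*
		 * Window: a timeout handler running on another CPU can
		 * complete the request between the check above and the
		 * completion below.
		 */
		nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
		blk_mq_complete_request(req);
		return true;
	}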
.teardown_lock may still cover the race with Sagi's patch, because teardown
actually cancels requests synchronously.
In extreme scenarios, the request may already have been retried successfully (rq state changed back to in-flight).
Timeout processing may then wrongly stop the queue and abort the request.
teardown_lock serializes the timeout and teardown paths, but it does not avoid the race.
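For context, the blk-mq helpers the check relies on are just reads of the
request state (simplified from blk-mq):

	enum mq_rq_state {
		MQ_RQ_IDLE	= 0,
		MQ_RQ_IN_FLIGHT	= 1,
		MQ_RQ_COMPLETE	= 2,
	};

	bool blk_mq_request_started(struct request *rq)
	{
		return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
	}

	bool blk_mq_request_completed(struct request *rq)
	{
		return blk_mq_rq_state(rq) == MQ_RQ_COMPLETE;
	}

If the command has already been completed, requeued and restarted by the time
the stale timeout handler runs, both checks pass again and the new incarnation
of the request gets aborted.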
It might not be safe.
In the teardown process, deleting the timer and canceling the timeout work after the queues are quiesced may be a better option.
Seems a better solution, given it is aligned with NVMe PCI's reset
handling. nvme_sync_queues() may be called in nvme_tcp_teardown_io_queues() to
avoid this race.
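Something along these lines; an untested sketch against ~v5.9, and the exact
context lines in nvme_tcp_teardown_io_queues() may differ:

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 	if (ctrl->queue_count <= 1)
 		return;
 	nvme_stop_queues(ctrl);
+	/*
+	 * Flush the pending timeout work and delete the timers once the
+	 * queues are quiesced, so a stale timeout handler cannot race
+	 * with the request cancelation below.
+	 */
+	nvme_sync_queues(ctrl);
 	nvme_tcp_stop_io_queues(ctrl);

nvme_sync_queues() walks the namespaces and calls blk_sync_queue() on each
queue, which does del_timer_sync(&q->timeout) plus
cancel_work_sync(&q->timeout_work), the same thing the PCI reset path relies on.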
Thanks,
Ming