Re: [PATCH] block: re-introduce blk_mq_complete_request_sync

RDMA also needs this patch.
We have been testing this patch with NVMe over RoCE for a few days, and it works well.

On 2020/10/14 9:08, Ming Lei wrote:
On Tue, Oct 13, 2020 at 03:36:08PM -0700, Sagi Grimberg wrote:

This may just reduce the probability. The concurrency of timeout
and teardown can still cause the same request to be handled twice,
which is not what we expect.

That is right. Unlike SCSI, NVMe does not apply atomic request
completion, so a request may be completed/freed from both the timeout
path and nvme_cancel_request().
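For reference, the kind of atomic completion SCSI relies on is a
per-command test-and-set: whichever context wins it owns the
completion, and the loser backs off. A rough sketch of that idea,
using a made-up flag in the driver's per-request pdu (illustrative
only, not actual SCSI or NVMe code):

enum { DEMO_REQ_COMPLETED = 0 };	/* hypothetical flag bit */

struct demo_req_pdu {
	unsigned long flags;		/* hypothetical per-request pdu field */
};

static void demo_complete(struct request *rq, u16 status)
{
	struct demo_req_pdu *pdu = blk_mq_rq_to_pdu(rq);

	/* Only the first of normal completion/timeout/cancel gets past here. */
	if (test_and_set_bit(DEMO_REQ_COMPLETED, &pdu->flags))
		return;

	nvme_req(rq)->status = status;
	blk_mq_complete_request(rq);
}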

.teardown_lock may still cover the race with Sagi's patch, because
teardown actually cancels requests synchronously.
In extreme scenarios the request may already have been retried
successfully (its state changed back to in-flight).
Timeout processing may then wrongly stop the queue and abort the request.
teardown_lock serializes the timeout and teardown processing, but it
does not avoid this race.
It might not be safe.
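Roughly, the interleaving we are worried about is the following
(illustrative timeline only, using the tcp.c functions touched by the
diff further down; the rdma.c path is analogous):

/*
 *   timeout work                        error recovery / reconnect
 *   ------------                        ---------------------------
 *   nvme_tcp_timeout() fires for rq
 *                                       nvme_tcp_teardown_io_queues()
 *                                         -> nvme_cancel_request()
 *                                            completes rq, which is
 *                                            requeued for retry
 *                                       reconnect succeeds, rq is
 *                                         re-issued (state back to
 *                                         MQ_RQ_IN_FLIGHT)
 *   nvme_tcp_complete_timed_out()
 *     stops the queue and aborts what
 *     is now a live, re-issued rq
 */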

Not sure I understand the scenario you are describing.

What do you mean by "in extreme scenarios the request may already have
been retried successfully (its state changed back to in-flight)"?

What would retry the request? The request will only be retried when
the host reconnects.

We can call nvme_sync_queues() in the last part of the teardown, but
I still don't understand the race here.

Unlike SCSI, NVMe doesn't complete requests atomically, so a double
completion/free can happen from both the timeout path and nvme_cancel_request() (via teardown).

Since the request is completed remotely or asynchronously in both code paths,
the teardown_lock can't protect against this case.

One solution is to apply the introduced blk_mq_complete_request_sync()
in both code paths.
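For reference, a minimal sketch of what the re-introduced helper could
look like, modeled on the version that used to live in blk-mq.c (the
actual patch may differ): it marks the request complete and runs
->complete() directly in the caller's context instead of bouncing it
to the submitting CPU, so the completion has really finished by the
time teardown_lock is released.

void blk_mq_complete_request_sync(struct request *rq)
{
	/* Complete in the caller's context, no IPI/softirq bounce. */
	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
	rq->q->mq_ops->complete(rq);
}
EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);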

Another candidate is to use nvme_sync_queues() before teardown, such as
the following change:

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index d6a3e1487354..dc3561ca0074 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1909,6 +1909,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
  	blk_mq_quiesce_queue(ctrl->admin_q);
  	nvme_start_freeze(ctrl);
  	nvme_stop_queues(ctrl);
+	nvme_sync_queues(ctrl);
nvme_sync_queues() currently syncs both the io queues and the admin queue, so we may need something like this:

void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
{
	struct nvme_ns *ns;

	down_read(&ctrl->namespaces_rwsem);
	list_for_each_entry(ns, &ctrl->namespaces, list)
		blk_sync_queue(ns->queue);
	up_read(&ctrl->namespaces_rwsem);
}
EXPORT_SYMBOL_GPL(nvme_sync_io_queues);

void nvme_sync_queues(struct nvme_ctrl *ctrl)
{
	nvme_sync_io_queues(ctrl);

	if (ctrl->admin_q)
		blk_sync_queue(ctrl->admin_q);
}
EXPORT_SYMBOL_GPL(nvme_sync_queues);
  	nvme_tcp_stop_io_queues(ctrl);
  	if (ctrl->tagset) {
  		blk_mq_tagset_busy_iter(ctrl->tagset,
@@ -2171,14 +2172,11 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
  	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
  	struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl;
-	/* fence other contexts that may complete the command */
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
  	nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
  	if (!blk_mq_request_completed(rq)) {
  		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
  		blk_mq_complete_request(rq);
  	}
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
  }
static enum blk_eh_timer_return


Thanks,
Ming




