Re: [PATCH 2/2] nvme: use blk-mq polling for uring commands

On 3/25/23 00:28, Keith Busch wrote:
From: Keith Busch <kbusch@xxxxxxxxxx>

The first advantage is that unshared and multipath namespaces can use
the same polling callback.

The other advantage is that we don't need a bio payload in order to
poll, allowing commands like 'flush' and 'write zeroes' to be submitted
on the same high priority queue as read and write commands.

This can also allow for a future driver optimization, since we would no
longer need to create special hidden block devices to back nvme-generic
char devs with unsupported command sets.

Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
---
  drivers/nvme/host/ioctl.c     | 79 ++++++++++++-----------------------
  drivers/nvme/host/multipath.c |  2 +-
  drivers/nvme/host/nvme.h      |  2 -
  3 files changed, 28 insertions(+), 55 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 723e7d5b778f2..369e8519b87a2 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -503,7 +503,6 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
  {
  	struct io_uring_cmd *ioucmd = req->end_io_data;
  	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
-	void *cookie = READ_ONCE(ioucmd->cookie);

  	req->bio = pdu->bio;
  	if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
@@ -516,9 +515,10 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
  	 * For iopoll, complete it directly.
  	 * Otherwise, move the completion to task work.
  	 */
-	if (cookie != NULL && blk_rq_is_poll(req))
+	if (blk_rq_is_poll(req)) {
+		WRITE_ONCE(ioucmd->cookie, NULL);
  		nvme_uring_task_cb(ioucmd);
-	else
+	} else
  		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);

  	return RQ_END_IO_FREE;
@@ -529,7 +529,6 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
  {
  	struct io_uring_cmd *ioucmd = req->end_io_data;
  	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
-	void *cookie = READ_ONCE(ioucmd->cookie);

  	req->bio = pdu->bio;
  	pdu->req = req;
@@ -538,9 +537,10 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
  	 * For iopoll, complete it directly.
  	 * Otherwise, move the completion to task work.
  	 */
-	if (cookie != NULL && blk_rq_is_poll(req))
+	if (blk_rq_is_poll(req)) {
+		WRITE_ONCE(ioucmd->cookie, NULL);
  		nvme_uring_task_meta_cb(ioucmd);
-	else
+	} else
  		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_meta_cb);

  	return RQ_END_IO_NONE;
@@ -597,7 +597,6 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
  	if (issue_flags & IO_URING_F_IOPOLL)
  		rq_flags |= REQ_POLLED;

-retry:
  	req = nvme_alloc_user_request(q, &c, rq_flags, blk_flags);
  	if (IS_ERR(req))
  		return PTR_ERR(req);
@@ -611,17 +610,9 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
  			return ret;
  	}

-	if (issue_flags & IO_URING_F_IOPOLL && rq_flags & REQ_POLLED) {
-		if (unlikely(!req->bio)) {
-			/* we can't poll this, so alloc regular req instead */
-			blk_mq_free_request(req);
-			rq_flags &= ~REQ_POLLED;
-			goto retry;
-		} else {
-			WRITE_ONCE(ioucmd->cookie, req->bio);
-			req->bio->bi_opf |= REQ_POLLED;
-		}
-	}
+	if (blk_rq_is_poll(req))
+		WRITE_ONCE(ioucmd->cookie, req);

Why aren't we always setting the cookie to point at the req?
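
For illustration, something along these lines is what the question is
getting at: set the cookie unconditionally at submission and move the
poll-capability check into the iopoll handler. This is only an untested
sketch, assuming the blk_rq_poll() request polling helper added earlier
in this series, and is not part of the posted patch.

In nvme_uring_cmd_io(), unconditionally:

	WRITE_ONCE(ioucmd->cookie, req);

and then in the iopoll handler:

	int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
					 struct io_comp_batch *iob,
					 unsigned int poll_flags)
	{
		/* sketch only: the cookie now always points at the request */
		struct request *req = READ_ONCE(ioucmd->cookie);

		/* skip polling for requests that were not set up as polled */
		if (!req || !blk_rq_is_poll(req))
			return 0;
		return blk_rq_poll(req, iob, poll_flags);
	}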


