[PATCH] io_uring: fix race with shadow drain deferrals

When we queue requests with drain, we check if we need to defer based
on the sequence. This is done safely under the lock, but we then drop
the lock before actually inserting the shadow. If the original request
is found on the deferred list by another completion in the meantime,
it could have been started AND completed by the time we insert the
shadow, which will stall the queue.
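
Schematically, the window looks like this (illustrative timeline; the
completion side here is the defer-list flush in io_commit_cqring()):

  CPU0: submit with drain                CPU1: completion
  -----------------------                ----------------
  lock(completion_lock)
  sequence check says defer;
  req goes on defer_list
  unlock(completion_lock)
                                         lock(completion_lock)
                                         flush finds req on defer_list,
                                         removes and issues it
                                         unlock(completion_lock)
                                         req runs and completes
  lock(completion_lock)
  shadow goes on defer_list
  unlock(completion_lock)

Nothing is left in flight to flush the shadow at that point, so it (and
everything queued behind it) sits on the defer_list forever.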

After re-grabbing the completion lock, check if the original request is
still on the deferred list. If it isn't, then we know that someone else
already found and issued it. If that happened, our job is done: we can
simply free the shadow.
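
The "is it still there" check below leans on a list_for_each_entry()
property worth spelling out: if the walk terminates without hitting the
break, the cursor is left pointing at the bogus entry computed from the
list head itself, which can never equal a real request. A minimal
userspace sketch of that idiom (list helpers re-implemented here just so
it builds standalone):

  #include <stdio.h>
  #include <stddef.h>

  /* pared-down copies of the kernel's list helpers, enough for the demo */
  struct list_head { struct list_head *next, *prev; };

  #define container_of(ptr, type, member) \
  	((type *)((char *)(ptr) - offsetof(type, member)))
  #define list_entry(ptr, type, member) container_of(ptr, type, member)
  #define list_for_each_entry(pos, head, member)			\
  	for (pos = list_entry((head)->next, typeof(*pos), member);	\
  	     &pos->member != (head);					\
  	     pos = list_entry(pos->member.next, typeof(*pos), member))

  struct item { int id; struct list_head list; };

  int main(void)
  {
  	struct list_head head = { &head, &head };	/* empty list */
  	struct item needle = { .id = 1 };
  	struct item *tmp;

  	list_for_each_entry(tmp, &head, list) {
  		if (tmp == &needle)
  			break;
  	}
  	/*
  	 * The loop body never ran, but tmp was still assigned: it holds
  	 * list_entry(&head), an address inside 'head' itself, so it
  	 * cannot alias a real item. "tmp != &needle" means "not found".
  	 */
  	printf("found: %s\n", tmp == &needle ? "yes" : "no");	/* no */
  	return 0;
  }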

Cc: Jackie Liu <liuyun01@xxxxxxxxxx>
Fixes: 4fe2c963154c ("io_uring: add support for link with drain")
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>

---

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6175e2e195c0..6fb25ae53817 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2973,6 +2973,7 @@ static void io_queue_link_head(struct io_kiocb *req, struct io_kiocb *shadow)
 	int ret;
 	int need_submit = false;
 	struct io_ring_ctx *ctx = req->ctx;
+	struct io_kiocb *tmp;
 
 	if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
 		ret = -ECANCELED;
@@ -3011,8 +3012,30 @@ static void io_queue_link_head(struct io_kiocb *req, struct io_kiocb *shadow)
 
 	/* Insert shadow req to defer_list, blocking next IOs */
 	spin_lock_irq(&ctx->completion_lock);
-	trace_io_uring_defer(ctx, shadow, true);
-	list_add_tail(&shadow->list, &ctx->defer_list);
+	if (ret) {
+		/*
+		 * We dropped the lock since deciding we needed to defer this
+		 * request. We must re-check under the lock, if it's now gone
+		 * from the list, that means that another completion came in
+		 * and submitted it since we decided we needed to defer. If
+		 * that's the case, simply drop the shadow, there's nothing
+		 * more we need to do here.
+		 */
+		list_for_each_entry(tmp, &ctx->defer_list, list) {
+			if (tmp == req)
+				break;
+		}
+		if (tmp != req) {
+			__io_free_req(shadow);
+			shadow = NULL;
+		}
+	}
+	if (shadow) {
+		trace_io_uring_defer(ctx, shadow, true);
+		list_add_tail(&shadow->list, &ctx->defer_list);
+	} else {
+		need_submit = false;
+	}
 	spin_unlock_irq(&ctx->completion_lock);
 
 	if (need_submit)

-- 
Jens Axboe



