[PATCH V2] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

io_wq_put_and_exit() is called from do_exit(), but FIXED_FILE requests
queued in io_wq are not canceled by io_uring_cancel_generic(), which is
also called from do_exit(). Meanwhile, the io_wq IO code path may share
resources with the normal iopoll code path.

So if any HIPRI request is submitted via io_wq, that request may never get
the resources it needs to make progress, given that iopoll isn't possible
in io_wq_put_and_exit().

The issue can be triggered when terminating 't/io_uring -n4 /dev/nullb0'
with default null_blk parameters.
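
For illustration, here is a minimal userspace sketch of the scenario: a
HIPRI (IOPOLL) read against a registered (fixed) file, pushed to io_wq via
IOSQE_ASYNC, with the task exiting before reaping the completion. This is
a hypothetical reproducer based on the description above rather than the
t/io_uring tool itself, and it assumes liburing plus a null_blk device at
/dev/nullb0; the confirmed trigger remains terminating
't/io_uring -n4 /dev/nullb0'.

	/* hypothetical sketch, build: gcc repro.c -o repro -luring */
	#define _GNU_SOURCE		/* for O_DIRECT */
	#include <fcntl.h>
	#include <stdlib.h>
	#include <liburing.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		void *buf;
		/* IOPOLL needs O_DIRECT on a polled device such as null_blk */
		int fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);

		if (fd < 0 || io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL))
			return 1;
		/* register the fd so the request becomes a FIXED_FILE request */
		if (io_uring_register_files(&ring, &fd, 1))
			return 1;
		if (posix_memalign(&buf, 4096, 4096))
			return 1;

		sqe = io_uring_get_sqe(&ring);
		/* fd index 0 in the fixed file table; IOSQE_ASYNC forces io_wq */
		io_uring_prep_read(sqe, 0, buf, 4096, 0);
		sqe->flags |= IOSQE_FIXED_FILE | IOSQE_ASYNC;
		io_uring_submit(&ring);

		/* exit without reaping the CQE; cleanup runs from do_exit() */
		return 0;
	}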

Fix it by adding a helper, io_uring_cancel_wq(), which cancels all requests
in io_wq. This is reasonable because the io_wq is destroyed immediately
after its requests are canceled.

Closes: https://lore.kernel.org/linux-block/3893581.1691785261@xxxxxxxxxxxxxxxxxxxxxx/
Reported-by: David Howells <dhowells@xxxxxxxxxx>
Cc: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
V2:
	- avoid messing up io_uring_cancel_generic() by adding a new
	  helper for canceling io_wq requests

 io_uring/io_uring.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index f4591b912ea8..7b3518f96c3b 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3298,6 +3298,37 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
 	return percpu_counter_sum(&tctx->inflight);
 }
 
+static void io_uring_cancel_wq(struct io_uring_task *tctx)
+{
+	int ret;
+
+	if (!tctx->io_wq)
+		return;
+
+	/*
+	 * FIXED_FILE requests aren't tracked in do_exit(), and they may
+	 * be submitted to our io_wq as iopoll requests, so cancel them
+	 * before destroying io_wq to avoid an IO hang
+	 */
+	do {
+		struct io_tctx_node *node;
+		unsigned long index;
+
+		ret = 0;
+		xa_for_each(&tctx->xa, index, node) {
+			struct io_ring_ctx *ctx = node->ctx;
+			struct io_task_cancel cancel = { .task = current, .all = true, };
+			enum io_wq_cancel cret;
+
+			io_iopoll_try_reap_events(ctx);
+			cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
+				       &cancel, true);
+			ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
+			cond_resched();
+		}
+	} while (ret);
+}
+
 /*
  * Find any io_uring ctx that this task has registered or done IO on, and cancel
  * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
@@ -3369,6 +3400,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
 		finish_wait(&tctx->wait, &wait);
 	} while (1);
 
+	io_uring_cancel_wq(tctx);
 	io_uring_clean_tctx(tctx);
 	if (cancel_all) {
 		/*
-- 
2.41.0



