Re: [PATCH] io_uring: don't issue reqs in iopoll mode when ctx is dying

On 24/02/2021 09:59, Pavel Begunkov wrote:
> On 24/02/2021 09:46, Pavel Begunkov wrote:
>>> Are you sure? We just don't want to reissue it, we need to fail it.
>>> Hence if we catch it at reissue time, that should be enough. But I'm
>>> open to clue batting :-)
>>
>> Jens, IOPOLL can happen from a different task, so
>> 1) we don't want to grab io_wq_work context from it. As always, we can
>> pass it through task_work, or it should be solved by your io-wq patches.
>>
>> 2) it can happen at any point in time, so the iovec may be gone already --
>> the same reason io_[read,write]() copy it before going to io-wq.
> 
> The diff below should solve the second problem by failing such requests (not tested).
> 1) can be left to the io-wq patches.
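
To make the fail-vs-reissue decision concrete, here is a user-space toy
of the check the first hunk below adds (all names invented, none of this
is kernel code): a request is only safe to reissue if its state has
already been copied out, or if the op never needed a copy to begin with.

#include <stdbool.h>
#include <stdio.h>

struct toy_op_def {
	bool needs_async_data;	/* op must copy iovec/state before going async */
};

struct toy_req {
	int opcode;
	void *async_data;	/* non-NULL once state has been copied out */
};

static const struct toy_op_def toy_op_defs[] = {
	[0] = { .needs_async_data = true },	/* e.g. a readv-like op */
	[1] = { .needs_async_data = false },	/* e.g. a fixed-buffer op */
};

/* safe to reissue iff the copy exists, or was never needed */
static bool toy_can_reissue(const struct toy_req *req)
{
	return req->async_data ||
	       !toy_op_defs[req->opcode].needs_async_data;
}

int main(void)
{
	struct toy_req vectored = { .opcode = 0, .async_data = NULL };
	struct toy_req fixed = { .opcode = 1, .async_data = NULL };

	/* 0: no copy yet, must be failed; 1: safe to reissue */
	printf("vectored, no copy: %d\n", toy_can_reissue(&vectored));
	printf("fixed buffer:      %d\n", toy_can_reissue(&fixed));
	return 0;
}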

We can even try to init it in io_complete_rw_iopoll(), similarly to
__io_complete_rw(). The tricky part for me is that "!inline exec" comment,
i.e. distinguishing whether io_complete_rw_iopoll() got -EAGAIN'ed inline
or from IRQ/etc. Jens?
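
For illustration, a user-space toy of the distinction I mean (all names
hypothetical, none of this is kernel code): a marker that is only set for
the duration of the inline issue would let the completion side tell the
two cases apart.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_req {
	bool in_inline_issue;	/* set only around the inline issue call */
	bool reissue;		/* submitter still around, retry is safe */
	bool failed;
};

/* models io_complete_rw_iopoll() seeing -EAGAIN */
static void toy_complete(struct toy_req *req, long res)
{
	if (res != -EAGAIN)
		return;
	if (req->in_inline_issue)
		req->reissue = true;	/* inline: iovec still valid */
	else
		req->failed = true;	/* IRQ/other task: iovec may be gone */
}

static void toy_issue_inline(struct toy_req *req)
{
	req->in_inline_issue = true;
	/* pretend the driver completed synchronously with -EAGAIN */
	toy_complete(req, -EAGAIN);
	req->in_inline_issue = false;
}

int main(void)
{
	struct toy_req a = {0}, b = {0};

	toy_issue_inline(&a);		/* inline -EAGAIN -> reissue */
	toy_complete(&b, -EAGAIN);	/* late -EAGAIN -> fail */
	printf("a: reissue=%d failed=%d\n", a.reissue, a.failed);
	printf("b: reissue=%d failed=%d\n", b.reissue, b.failed);
	return 0;
}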


diff --git a/fs/io_uring.c b/fs/io_uring.c
index bf9ad810c621..413bb4dd0a2f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2610,8 +2610,11 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 		list_del(&req->inflight_entry);
 
 		if (READ_ONCE(req->result) == -EAGAIN) {
+			bool reissue = req->async_data ||
+				!io_op_defs[req->opcode].needs_async_data;
+
 			req->iopoll_completed = 0;
-			if (io_rw_reissue(req))
+			if (reissue && io_rw_reissue(req))
 				continue;
 		}
 
@@ -2829,7 +2832,7 @@ static bool io_resubmit_prep(struct io_kiocb *req)
 }
 #endif
 
-static bool io_rw_reissue(struct io_kiocb *req)
+static bool io_rw_reissue_prep(struct io_kiocb *req)
 {
 #ifdef CONFIG_BLOCK
 	umode_t mode = file_inode(req->file)->i_mode;
@@ -2844,13 +2847,20 @@ static bool io_rw_reissue(struct io_kiocb *req)
 
 	ret = io_sq_thread_acquire_mm_files(req->ctx, req);
 
-	if (!ret && io_resubmit_prep(req)) {
+	if (!ret && io_resubmit_prep(req))
+		return true;
+	req_set_fail_links(req);
+#endif
+	return false;
+}
+
+static bool io_rw_reissue(struct io_kiocb *req)
+{
+	if (io_rw_reissue_prep(req)) {
 		refcount_inc(&req->refs);
 		io_queue_async_work(req);
 		return true;
 	}
-	req_set_fail_links(req);
-#endif
 	return false;
 }
 
@@ -2885,8 +2896,12 @@ static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
 	if (kiocb->ki_flags & IOCB_WRITE)
 		kiocb_end_write(req);
 
-	if (res != -EAGAIN && res != req->result)
+	if (res == -EAGAIN) {
+		if (/* !inline exec || */ !io_rw_reissue_prep(req))
+			req_set_fail_links(req);
+	} else if (res != req->result) {
 		req_set_fail_links(req);
+	}
 
 	WRITE_ONCE(req->result, res);
 	/* order with io_poll_complete() checking ->result */
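
As a side note on 2) above, a minimal user-space sketch (hypothetical
names, not kernel code) of why the copy matters: the submitter's iovec
typically lives on its stack, so anything retried after submission
returns has to work from a private copy, the way io_read()/io_write()
stash one in ->async_data before punting to io-wq.

#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

struct toy_req {
	struct iovec *async_iov;	/* private copy for a deferred retry */
	int nr_segs;
};

/* models the io_setup_async_rw()-style copy before going async */
static int toy_prep_async(struct toy_req *req,
			  const struct iovec *iov, int nr_segs)
{
	req->async_iov = malloc(nr_segs * sizeof(*iov));
	if (!req->async_iov)
		return -1;
	memcpy(req->async_iov, iov, nr_segs * sizeof(*iov));
	req->nr_segs = nr_segs;
	return 0;
}

int main(void)
{
	struct toy_req req = { 0 };
	char buf[64];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };

	/* copy before the submitter's stack frame (and iov) goes away */
	if (toy_prep_async(&req, &iov, 1))
		return 1;
	/* ... a later reissue would use req.async_iov, not iov ... */
	free(req.async_iov);
	return 0;
}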



-- 
Pavel Begunkov