[RFC 04/11] io_uring: don't take ctx refs in tctx_task_work()

We now forbid queueing new req task_works once task cancellation has
started. A tctx is removed from a ctx's tctx list only during task
cancellation, and since tctx_task_work() runs in the context of that
task, current is accounted in every ring tctx_task_work() works with,
so those rings stay alive for at least as long as it is running.
Don't take extra ctx refs.
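
For illustration, below is a condensed sketch of what the per-req loop
in handle_tw_list() looks like with this patch applied. It is not the
exact function body: the surrounding do/while framing and the
container_of() walk over the work list are assumed from mainline
around the time of this series, and only the hunk shown in the diff is
taken verbatim.

	static void handle_tw_list(struct io_wq_work_node *node,
				   struct io_ring_ctx **ctx, bool *locked)
	{
		do {
			struct io_wq_work_node *next = node->next;
			struct io_kiocb *req = container_of(node, struct io_kiocb,
							    io_task_work.node);

			if (req->ctx != *ctx) {
				/* no percpu_ref_put() in here anymore */
				ctx_flush_and_put(*ctx, locked);
				*ctx = req->ctx;
				/* if not contended, grab and improve batching */
				*locked = mutex_trylock(&(*ctx)->uring_lock);
				/*
				 * No percpu_ref_get(&(*ctx)->refs): we run in
				 * the task's context, and current is accounted
				 * in this ring, so it can't go away under us.
				 */
			}
			req->io_task_work.func(req, locked);
			node = next;
		} while (node);
	}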

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ec5fe55ab265..8d5aff1ecb4c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2475,7 +2475,6 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
 		mutex_unlock(&ctx->uring_lock);
 		*locked = false;
 	}
-	percpu_ref_put(&ctx->refs);
 }
 
 static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
@@ -2506,7 +2505,6 @@ static void handle_prev_tw_list(struct io_wq_work_node *node,
 			*ctx = req->ctx;
 			/* if not contended, grab and improve batching */
 			*uring_locked = mutex_trylock(&(*ctx)->uring_lock);
-			percpu_ref_get(&(*ctx)->refs);
 			if (unlikely(!*uring_locked))
 				spin_lock(&(*ctx)->completion_lock);
 		}
@@ -2537,7 +2535,6 @@ static void handle_tw_list(struct io_wq_work_node *node,
 			*ctx = req->ctx;
 			/* if not contended, grab and improve batching */
 			*locked = mutex_trylock(&(*ctx)->uring_lock);
-			percpu_ref_get(&(*ctx)->refs);
 		}
 		req->io_task_work.func(req, locked);
 		node = next;
-- 
2.36.0



