[PATCH 3/8] io_uring/kbuf: move locking into io_kbuf_drop()

Move the burden of locking out of the caller and into io_kbuf_drop(),
which will help with further refactoring.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 io_uring/io_uring.c | 5 +----
 io_uring/kbuf.h     | 4 ++--
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6fa1e88e40fbe..ed7c9081352a4 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -398,11 +398,8 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 
 static void io_clean_op(struct io_kiocb *req)
 {
-	if (req->flags & REQ_F_BUFFER_SELECTED) {
-		spin_lock(&req->ctx->completion_lock);
+	if (unlikely(req->flags & REQ_F_BUFFER_SELECTED))
 		io_kbuf_drop(req);
-		spin_unlock(&req->ctx->completion_lock);
-	}
 
 	if (req->flags & REQ_F_NEED_CLEANUP) {
 		const struct io_cold_def *def = &io_cold_defs[req->opcode];
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index bd80c44c5af1e..310f94a0727a6 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -174,13 +174,13 @@ static inline void __io_put_kbuf_list(struct io_kiocb *req, int len,
 
 static inline void io_kbuf_drop(struct io_kiocb *req)
 {
-	lockdep_assert_held(&req->ctx->completion_lock);
-
 	if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
 		return;
 
+	spin_lock(&req->ctx->completion_lock);
 	/* len == 0 is fine here, non-ring will always drop all of it */
 	__io_put_kbuf_list(req, 0, &req->ctx->io_buffers_comp);
+	spin_unlock(&req->ctx->completion_lock);
 }
 
 static inline unsigned int __io_put_kbufs(struct io_kiocb *req, int len,
-- 
2.47.1