[PATCH 3/5] io_uring: protect rsrc dealloc by uring_lock


 



As with ->async_data, also protect all rsrc deallocation with ->uring_lock,
so that when we do refcount avoidance they are not removed unexpectedly
while someone is still accessing them.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
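
Note for reviewers, not part of the change itself: with this applied, the
put loop in __io_rsrc_put_work() is bracketed by the ring lock roughly as
below (just the hunk reassembled for readability, context outside the loop
trimmed). The lock is now taken unconditionally around the whole list walk,
instead of per-tag via io_ring_submit_lock() for IOPOLL rings only.

	/* rsrc deallocation must be protected by ->uring_lock */
	mutex_lock(&ctx->uring_lock);
	list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
		list_del(&prsrc->list);

		if (prsrc->tag) {
			/* post the tagged CQE under ->completion_lock */
			spin_lock_irq(&ctx->completion_lock);
			io_cqring_fill_event(ctx, prsrc->tag, 0, 0);
			ctx->cq_extra++;
			io_commit_cqring(ctx);
			spin_unlock_irq(&ctx->completion_lock);
			io_cqring_ev_posted(ctx);
		}

		rsrc_data->do_put(ctx, prsrc);
		kfree(prsrc);
	}
	mutex_unlock(&ctx->uring_lock);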

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 8ca9895535dd..cd3a1058f657 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7664,25 +7664,24 @@ static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
 	struct io_ring_ctx *ctx = rsrc_data->ctx;
 	struct io_rsrc_put *prsrc, *tmp;
 
+	/* rsrc deallocation must be protected by ->uring_lock */
+	mutex_lock(&ctx->uring_lock);
 	list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
 		list_del(&prsrc->list);
 
 		if (prsrc->tag) {
-			bool lock_ring = ctx->flags & IORING_SETUP_IOPOLL;
-
-			io_ring_submit_lock(ctx, lock_ring);
 			spin_lock_irq(&ctx->completion_lock);
 			io_cqring_fill_event(ctx, prsrc->tag, 0, 0);
 			ctx->cq_extra++;
 			io_commit_cqring(ctx);
 			spin_unlock_irq(&ctx->completion_lock);
 			io_cqring_ev_posted(ctx);
-			io_ring_submit_unlock(ctx, lock_ring);
 		}
 
 		rsrc_data->do_put(ctx, prsrc);
 		kfree(prsrc);
 	}
+	mutex_unlock(&ctx->uring_lock);
 
 	io_rsrc_node_destroy(ref_node);
 	if (atomic_dec_and_test(&rsrc_data->refs))
-- 
2.32.0



