[PATCH for-next] io_uring: fix io_free_batch_list races

[  158.514382] WARNING: CPU: 5 PID: 15251 at fs/io_uring.c:1141 io_free_batch_list+0x269/0x360
[  158.514426] RIP: 0010:io_free_batch_list+0x269/0x360
[  158.514437] Call Trace:
[  158.514440]  __io_submit_flush_completions+0xde/0x180
[  158.514444]  tctx_task_work+0x14a/0x220
[  158.514447]  task_work_run+0x64/0xa0
[  158.514448]  __do_sys_io_uring_enter+0x7c/0x970
[  158.514450]  __x64_sys_io_uring_enter+0x22/0x30
[  158.514451]  do_syscall_64+0x43/0x90
[  158.514453]  entry_SYSCALL_64_after_hwframe+0x44/0xae

We should not touch request internals, including req->comp_list.next,
after putting our ref if it's not the final one: the remaining ref
holder may free the request at any point, e.g. requests can start
being recycled through the free cache. Read the next node before
dropping the ref.
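
For illustration only, a minimal userspace sketch of the corrected
ordering (struct req, put_and_test() and the single extra owner are
simplified stand-ins, not the io_uring internals): cache ->next before
the put, so a concurrent final put cannot leave us dereferencing a
freed request.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct req {
	atomic_int refs;
	struct req *next;
};

/* Drop one reference; returns true if it was the last one. */
static bool put_and_test(struct req *req)
{
	return atomic_fetch_sub(&req->refs, 1) == 1;
}

static void free_batch(struct req *node)
{
	while (node) {
		struct req *req = node;

		/*
		 * Read the link before dropping our ref: once the put
		 * below happens, another ref holder may free or recycle
		 * the request, so ->next must not be touched afterwards.
		 */
		node = req->next;

		if (!put_and_test(req))
			continue;	/* another owner will free it */

		free(req);
	}
}

int main(void)
{
	struct req *a = calloc(1, sizeof(*a));
	struct req *b = calloc(1, sizeof(*b));

	atomic_init(&a->refs, 1);
	atomic_init(&b->refs, 2);	/* pretend a second owner exists */
	a->next = b;

	free_batch(a);	/* frees a; b is left for its other owner */
	free(b);	/* the "other owner" releases b here */
	return 0;
}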

Fixes: 62ca9cb93e7f8 ("io_uring: optimise io_free_batch_list()")
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 73135c5c6168..55435464cee0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2266,9 +2266,10 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 						    comp_list);
 
-		if (!req_ref_put_and_test(req)) {
+		if (unlikely(req->flags & REQ_F_REFCOUNT)) {
 			node = req->comp_list.next;
-			continue;
+			if (!req_ref_put_and_test(req))
+				continue;
 		}
 
 		io_queue_next(req);
-- 
2.33.0



