On 1/14/23 9:15 AM, Jens Axboe wrote:
> On 1/14/23 2:51 AM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
>>
>> The patch below does not apply to the 5.15-stable tree.
>> If someone wants it applied there, or to any other stable or longterm
>> tree, then please email the backport, including the original git commit
>> id to <stable@xxxxxxxxxxxxxxx>.
>
> This has to be done a bit differently, but this one should work. I tested
> it on 5.10-stable, but should apply to 5.15-stable as well as they are
> the same base now.

Oops, missed accounting for overflow. Please use this one instead! Sorry
about that.

-- 
Jens Axboe
From 929e5c54ff2ad116e9dcee967fc5df1dbb6b1d90 Mon Sep 17 00:00:00 2001
From: Pavel Begunkov <asml.silence@xxxxxxxxx>
Date: Sat, 14 Jan 2023 09:14:03 -0700
Subject: [PATCH] io_uring: lock overflowing for IOPOLL
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit 544d163d659d45a206d8929370d5a2984e546cb7 upstream.

syzbot reports an issue with overflow filling for IOPOLL:

WARNING: CPU: 0 PID: 28 at io_uring/io_uring.c:734 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
CPU: 0 PID: 28 Comm: kworker/u4:1 Not tainted 6.2.0-rc3-syzkaller-16369-g358a161a6a9e #0
Workqueue: events_unbound io_ring_exit_work
Call trace:
 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
 io_req_cqe_overflow+0x5c/0x70 io_uring/io_uring.c:773
 io_fill_cqe_req io_uring/io_uring.h:168 [inline]
 io_do_iopoll+0x474/0x62c io_uring/rw.c:1065
 io_iopoll_try_reap_events+0x6c/0x108 io_uring/io_uring.c:1513
 io_uring_try_cancel_requests+0x13c/0x258 io_uring/io_uring.c:3056
 io_ring_exit_work+0xec/0x390 io_uring/io_uring.c:2869
 process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
 worker_thread+0x340/0x610 kernel/workqueue.c:2436
 kthread+0x12c/0x158 kernel/kthread.c:376
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:863

There is no real problem for normal IOPOLL as flush is also called with
uring_lock taken, but it's getting more complicated for IOPOLL|SQPOLL,
for which __io_cqring_overflow_flush() happens from the CQ waiting path.

Reported-and-tested-by: syzbot+6805087452d72929404e@xxxxxxxxxxxxxxxxxxxxxxxxx
Cc: stable@xxxxxxxxxxxxxxx # 5.10+
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
---
 io_uring/io_uring.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 0c4d16afb9ef..c3838a2c5c6f 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2478,12 +2478,26 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
 
 	io_init_req_batch(&rb);
 	while (!list_empty(done)) {
+		struct io_uring_cqe *cqe;
+		unsigned cflags;
+
 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
 		list_del(&req->inflight_entry);
-
-		io_fill_cqe_req(req, req->result, io_put_rw_kbuf(req));
+		cflags = io_put_rw_kbuf(req);
 		(*nr_events)++;
 
+		cqe = io_get_cqe(ctx);
+		if (unlikely(!cqe)) {
+			spin_lock(&ctx->completion_lock);
+			io_cqring_event_overflow(ctx, req->user_data,
+						 req->result, cflags);
+			spin_unlock(&ctx->completion_lock);
+			continue;
+		}
+		WRITE_ONCE(cqe->user_data, req->user_data);
+		WRITE_ONCE(cqe->res, req->result);
+		WRITE_ONCE(cqe->flags, cflags);
+
		if (req_ref_put_and_test(req))
			io_req_free_batch(&rb, req, &ctx->submit_state);
 	}
-- 
2.39.0
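
For anyone following along who isn't deep in this code: the rule the patch
enforces is that the overflow side channel (what io_cqring_event_overflow()
appends to) must be touched under ->completion_lock, because with
IOPOLL|SQPOLL the flush runs from the CQ waiting path under that lock rather
than under uring_lock. As a rough, standalone userspace sketch of that
pattern only (overflow_cqe, post_overflow and flush_overflow are made-up
names for illustration, not kernel APIs):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a CQE that didn't fit in the ring. */
struct overflow_cqe {
	uint64_t user_data;
	int32_t res;
	uint32_t flags;
	struct overflow_cqe *next;
};

/* Plays the role of ctx->completion_lock in the sketch. */
static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;
static struct overflow_cqe *overflow_list;

/* Completion path analogue: the ring is full, stash the CQE. Takes the
 * same lock the flusher uses, which is the point of the patch. */
static void post_overflow(uint64_t user_data, int32_t res, uint32_t flags)
{
	struct overflow_cqe *cqe = malloc(sizeof(*cqe));

	cqe->user_data = user_data;
	cqe->res = res;
	cqe->flags = flags;

	pthread_mutex_lock(&completion_lock);
	cqe->next = overflow_list;
	overflow_list = cqe;
	pthread_mutex_unlock(&completion_lock);
}

/* CQ-waiting-path analogue: drain whatever overflowed. */
static void flush_overflow(void)
{
	pthread_mutex_lock(&completion_lock);
	while (overflow_list) {
		struct overflow_cqe *cqe = overflow_list;

		overflow_list = cqe->next;
		printf("flushed cqe: user_data=%llu res=%d\n",
		       (unsigned long long)cqe->user_data, cqe->res);
		free(cqe);
	}
	pthread_mutex_unlock(&completion_lock);
}

int main(void)
{
	post_overflow(1, 0, 0);
	flush_overflow();
	return 0;
}

Without the lock in the posting path, the list insert and the flusher's walk
can interleave; that's the shape of the race the warning above is about.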