Pin the ring in io_fallback_req_func() by briefly elevating ctx->refs, in
case any task_work handler touches ctx after releasing a request.

Fixes: 9011bf9a13e3b ("io_uring: fix stuck fallback reqs")
Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---

p.s. I had been considering it back then but don't remember why it was
dropped. Just being extra careful not to regress in 5.14.

 fs/io_uring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6a092a534d2b..979941bcd15a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2477,8 +2477,10 @@ static void io_fallback_req_func(struct work_struct *work)
 	struct llist_node *node = llist_del_all(&ctx->fallback_llist);
 	struct io_kiocb *req, *tmp;
 
+	percpu_ref_get(&ctx->refs);
 	llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
 		req->io_task_work.func(req);
+	percpu_ref_put(&ctx->refs);
 }
 
 static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
-- 
2.32.0
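
[Editor's note: a minimal, hedged user-space sketch of the pattern the patch
applies, not kernel code. It models why the fallback worker takes its own
reference on the context before running queued handlers: a handler may drop
the last request-held reference, and without the extra pin the context could
be torn down while the loop still touches it. The names ctx_get(), ctx_put(),
handler() and fallback_run() are hypothetical stand-ins for percpu_ref_get(),
percpu_ref_put() and the io_task_work callbacks.]

	/* User-space analogy of pinning ctx->refs across the fallback loop. */
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct ctx {
		atomic_int refs;
	};

	static void ctx_get(struct ctx *c)
	{
		atomic_fetch_add(&c->refs, 1);
	}

	static void ctx_put(struct ctx *c)
	{
		/* Free the context when the last reference is dropped. */
		if (atomic_fetch_sub(&c->refs, 1) == 1) {
			printf("ctx freed\n");
			free(c);
		}
	}

	/* A queued handler drops the reference its request held on ctx. */
	static void handler(struct ctx *c)
	{
		ctx_put(c);
	}

	static void fallback_run(struct ctx *c, int nr_queued)
	{
		/* Pin ctx so it stays alive even if every handler drops a ref. */
		ctx_get(c);
		for (int i = 0; i < nr_queued; i++)
			handler(c);
		ctx_put(c);
	}

	int main(void)
	{
		struct ctx *c = malloc(sizeof(*c));

		atomic_init(&c->refs, 2);	/* two queued requests, one ref each */
		fallback_run(c, 2);		/* ctx outlives both handlers */
		return 0;
	}

Without the ctx_get()/ctx_put() pair in fallback_run(), the second handler's
put would free the context while the loop is still using it, which mirrors
the race the two-line kernel change above closes.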