On 1/29/23 6:26 AM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
>
> The patch below does not apply to the 6.1-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.

This should do it.

-- 
Jens Axboe
From 71a58ab8cf1bb4b4c286fbabe266a82bab20fdf2 Mon Sep 17 00:00:00 2001
From: Dylan Yudaken <dylany@xxxxxxxx>
Date: Sun, 29 Jan 2023 12:34:51 -0700
Subject: [PATCH] io_uring: always prep_async for drain requests

commit ef5c600adb1d985513d2b612cc90403a148ff287 upstream.

Drain requests all go through io_drain_req, which has a quick exit in
case there is nothing pending (i.e. the drain is not useful). In that
case it can issue the request immediately. However, for safety it
queues it through task work.

The problem is that in this case the request is run asynchronously, but
the async work has not been prepared through io_req_prep_async.

This has not been a problem up to now, as the task work would always run
before returning to userspace, and so the user would not have a chance
to race with it.

However, with IORING_SETUP_DEFER_TASKRUN this is no longer the case and
the work might be deferred, giving userspace a chance to change data
being referred to in the request.

Instead, _always_ prep_async for drain requests, which is simpler anyway
and removes this issue.

Cc: stable@xxxxxxxxxxxxxxx
Fixes: c0e0d6ba25f1 ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
Signed-off-by: Dylan Yudaken <dylany@xxxxxxxx>
Link: https://lore.kernel.org/r/20230127105911.2420061-1-dylany@xxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
---
 io_uring/io_uring.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index cea5de98c423..6fc4aaef5fe2 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1658,17 +1658,12 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	}
 	spin_unlock(&ctx->completion_lock);
 
-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_complete_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_complete_failed(req, ret);
+		return;
 	}
 
 	spin_lock(&ctx->completion_lock);
@@ -1942,13 +1937,16 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_complete_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_complete_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}
-- 
2.39.0
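
For context, a minimal userspace sketch of the scenario the commit message
describes, assuming liburing and a 6.1+ kernel with DEFER_TASKRUN support.
The drained writev and its iovec stand in for the "data being referred to
in the request"; none of this is part of the patch itself.

/*
 * Illustrative sketch only: a drained request on a DEFER_TASKRUN ring.
 * Assumes liburing; names and error handling are minimal on purpose.
 */
#include <liburing.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[] = "hello\n";
	struct iovec iov = { .iov_base = buf, .iov_len = strlen(buf) };
	int ret;

	/* DEFER_TASKRUN requires SINGLE_ISSUER; deferred task work is then
	 * only run when this task waits for completions. */
	ret = io_uring_queue_init(8, &ring,
				  IORING_SETUP_SINGLE_ISSUER |
				  IORING_SETUP_DEFER_TASKRUN);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/* A drained writev with nothing else pending: io_drain_req() takes
	 * its quick-exit path and queues the issue through task work. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_writev(sqe, STDOUT_FILENO, &iov, 1, 0);
	sqe->flags |= IOSQE_IO_DRAIN;
	io_uring_submit(&ring);

	/*
	 * With DEFER_TASKRUN the deferred issue may not have run yet, so
	 * userspace could rewrite 'iov' at this point and race with the
	 * kernel's use of it - the window the patch closes by always
	 * calling io_req_prep_async() for drained requests.
	 */
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

The sketch only shows where the deferred issue sits relative to userspace;
actually hitting the race would need the iovec to be modified between the
submit and the wait.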