Patch "io_uring: always prep_async for drain requests" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    io_uring: always prep_async for drain requests

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-always-prep_async-for-drain-requests.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 8a5cdc76006684732443d8849efa6589f58d07b3
Author: Dylan Yudaken <dylany@xxxxxxxx>
Date:   Fri Jan 27 02:59:11 2023 -0800

    io_uring: always prep_async for drain requests
    
    [ Upstream commit ef5c600adb1d985513d2b612cc90403a148ff287 ]
    
    Drain requests all go through io_drain_req, which has a quick exit in case
    there is nothing pending (i.e. the drain is not useful). In that case it
    can issue the request immediately.
    
    However, for safety, it queues the request through task work.
    The problem is that the request is then run asynchronously, but the async
    work has not been prepared through io_req_prep_async.
    
    This has not been a problem up to now, as the task work would always run
    before returning to userspace, and so the user had no chance to race
    with it.
    
    However, with IORING_SETUP_DEFER_TASKRUN, this is no longer the case: the
    work might be deferred, giving userspace a chance to change data referred
    to by the request.
    
    Instead _always_ prep_async for drain requests, which is simpler anyway
    and removes this issue.
    
    Cc: stable@xxxxxxxxxxxxxxx
    Fixes: c0e0d6ba25f1 ("io_uring: add IORING_SETUP_DEFER_TASKRUN")
    Signed-off-by: Dylan Yudaken <dylany@xxxxxxxx>
    Link: https://lore.kernel.org/r/20230127105911.2420061-1-dylany@xxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
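
The fix works because ->prep_async() snapshots any user-supplied state a
request depends on, so a deferred issue never dereferences user memory.
The following is an editorial sketch of that invariant in plain C (the
names are illustrative stand-ins, not kernel code):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct user_args { char buf[64]; };	/* stands in for a user msghdr */

struct request {
	struct user_args *user;		/* user memory: may change after
					 * the submit syscall returns */
	struct user_args *async_copy;	/* private snapshot: safe to defer */
};

/* Analogue of io_req_prep_async(): copy user state before taking any
 * path that might run the request after returning to userspace. */
static int prep_async(struct request *req)
{
	req->async_copy = malloc(sizeof(*req->async_copy));
	if (!req->async_copy)
		return -ENOMEM;
	memcpy(req->async_copy, req->user, sizeof(*req->async_copy));
	return 0;
}

/* The deferred issue reads only the snapshot, so userspace writes to
 * req->user between submit and task-work execution are harmless. */
static void deferred_issue(struct request *req)
{
	/* ... operate on req->async_copy, never on req->user ... */
	(void)req;
}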

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 13a60f51b283..862e05e6691d 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1634,17 +1634,12 @@ static __cold void io_drain_req(struct io_kiocb *req)
 	}
 	spin_unlock(&ctx->completion_lock);
 
-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_complete_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_complete_failed(req, ret);
+		return;
 	}
 
 	spin_lock(&ctx->completion_lock);
@@ -1918,13 +1913,16 @@ static void io_queue_sqe_fallback(struct io_kiocb *req)
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_complete_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);
 
-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_complete_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}
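
To see how userspace could hit the window the commit describes, consider
a minimal liburing-based sketch (an editorial illustration, not part of
the patch; a connected socket in sockfd and error handling are assumed,
and sendmsg is chosen because its prep_async step copies the user
msghdr):

#include <liburing.h>
#include <string.h>
#include <sys/socket.h>

static int drain_race_demo(int sockfd)
{
	struct io_uring ring;
	struct io_uring_params p = { 0 };
	struct iovec iov = { .iov_base = (void *)"hello", .iov_len = 5 };
	struct msghdr msg = { 0 };
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	/* DEFER_TASKRUN (which requires SINGLE_ISSUER) postpones task
	 * work until this task re-enters the kernel. */
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
	if (io_uring_queue_init_params(8, &ring, &p))
		return -1;

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_sendmsg(sqe, sockfd, &msg, 0);
	sqe->flags |= IOSQE_IO_DRAIN;	/* routes through io_drain_req() */
	io_uring_submit(&ring);

	/* Submission has returned, but the drained request may still sit
	 * in deferred task work. Before this fix its msghdr had not been
	 * copied by prep_async, so changing it here raced with the
	 * kernel's deferred read of it: */
	memset(&msg, 0, sizeof(msg));

	io_uring_wait_cqe(&ring, &cqe);	/* runs the deferred task work */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}

With the patch applied, io_req_prep_async() runs before the drain path
queues anything, so the deferred issue works from the kernel's own copy
of the msghdr no matter what userspace does in that window.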


