This is a note to let you know that I've just added the patch titled

    io_uring/poll: add requeue return code from poll multishot handling

to the 6.7-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-poll-add-requeue-return-code-from-poll-multishot-handling.patch
and it can be found in the queue-6.7 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 1a45c2b50f7f10e84a4ee57b0b0556d76f995866 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@xxxxxxxxx>
Date: Mon, 29 Jan 2024 11:57:11 -0700
Subject: io_uring/poll: add requeue return code from poll multishot handling

From: Jens Axboe <axboe@xxxxxxxxx>

Commit 704ea888d646cb9d715662944cf389c823252ee0 upstream.

Since our poll handling is edge triggered, multishot handlers retry
internally until they know that no more data is available. In
preparation for limiting these retries, add an internal return code,
IOU_REQUEUE, which can be used to inform the poll backend about the
handler wanting to retry, but that this should happen through a normal
task_work requeue rather than keep hammering on the issue side for this
one request.

No functional changes in this patch, nobody is using this return code
just yet.

Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 io_uring/io_uring.h |    7 +++++++
 io_uring/poll.c     |    9 ++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -31,6 +31,13 @@ enum {
 	IOU_ISSUE_SKIP_COMPLETE	= -EIOCBQUEUED,
 
 	/*
+	 * Requeue the task_work to restart operations on this request. The
+	 * actual value isn't important, should just be not an otherwise
+	 * valid error code, yet less than -MAX_ERRNO and valid internally.
+	 */
+	IOU_REQUEUE		= -3072,
+
+	/*
 	 * Intended only when both IO_URING_F_MULTISHOT is passed
 	 * to indicate to the poll runner that multishot should be
 	 * removed and the result is set on req->cqe.res.
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -226,6 +226,7 @@ enum {
 	IOU_POLL_NO_ACTION = 1,
 	IOU_POLL_REMOVE_POLL_USE_RES = 2,
 	IOU_POLL_REISSUE = 3,
+	IOU_POLL_REQUEUE = 4,
 };
 
 static void __io_poll_execute(struct io_kiocb *req, int mask)
@@ -329,6 +330,8 @@ static int io_poll_check_events(struct i
 			int ret = io_poll_issue(req, ts);
 			if (ret == IOU_STOP_MULTISHOT)
 				return IOU_POLL_REMOVE_POLL_USE_RES;
+			else if (ret == IOU_REQUEUE)
+				return IOU_POLL_REQUEUE;
 			if (ret < 0)
 				return ret;
 		}
@@ -351,8 +354,12 @@ void io_poll_task_func(struct io_kiocb *
 	int ret;
 
 	ret = io_poll_check_events(req, ts);
-	if (ret == IOU_POLL_NO_ACTION)
+	if (ret == IOU_POLL_NO_ACTION) {
 		return;
+	} else if (ret == IOU_POLL_REQUEUE) {
+		__io_poll_execute(req, 0);
+		return;
+	}
 
 	io_poll_remove_entries(req);
 	io_poll_tw_hash_eject(req, ts);


Patches currently in stable-queue which might be from axboe@xxxxxxxxx are

queue-6.7/io_uring-poll-add-requeue-return-code-from-poll-multishot-handling.patch
queue-6.7/io_uring-net-limit-inline-multishot-retries.patch
queue-6.7/io_uring-rw-ensure-poll-based-multishot-read-retries-appropriately.patch
queue-6.7/blk-iocost-fix-an-ubsan-shift-out-of-bounds-warning.patch
queue-6.7/io_uring-net-fix-sr-len-for-ioring_op_recv-with-msg_waitall-and-buffers.patch
queue-6.7/io_uring-net-un-indent-mshot-retry-path-in-io_recv_finish.patch
queue-6.7/io_uring-poll-move-poll-execution-helpers-higher-up.patch
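
For readers outside the kernel tree, here is a minimal standalone sketch of
the control flow the patch adds; it is not part of the patch. The enum values
mirror the diff above (IOU_STOP_MULTISHOT is -ECANCELED in io_uring/io_uring.h),
while issue_multishot() and poll_check_events() are hypothetical stand-ins for
io_poll_issue(), io_poll_check_events() and io_poll_task_func().

/*
 * Userspace model only: a multishot handler that wants to keep going
 * without hammering the inline issue path returns IOU_REQUEUE, the event
 * check maps that to IOU_POLL_REQUEUE, and the task_work function then
 * requeues the request instead of tearing the poll entry down.
 */
#include <errno.h>
#include <stdio.h>

enum {
	IOU_REQUEUE		= -3072,	/* new internal code from this patch */
	IOU_STOP_MULTISHOT	= -ECANCELED,	/* existing internal code */
};

enum {
	IOU_POLL_NO_ACTION = 1,
	IOU_POLL_REMOVE_POLL_USE_RES = 2,
	IOU_POLL_REQUEUE = 4,			/* new in this patch */
};

/* stand-in for io_poll_issue() invoking a multishot handler */
static int issue_multishot(int want_requeue)
{
	return want_requeue ? IOU_REQUEUE : IOU_STOP_MULTISHOT;
}

/* mirrors the new branch added to io_poll_check_events() */
static int poll_check_events(int want_requeue)
{
	int ret = issue_multishot(want_requeue);

	if (ret == IOU_STOP_MULTISHOT)
		return IOU_POLL_REMOVE_POLL_USE_RES;
	else if (ret == IOU_REQUEUE)
		return IOU_POLL_REQUEUE;
	return ret < 0 ? ret : IOU_POLL_NO_ACTION;
}

int main(void)
{
	/* mirrors the new branch added to io_poll_task_func() */
	int ret = poll_check_events(1);

	if (ret == IOU_POLL_REQUEUE)
		printf("requeue via task_work instead of retrying inline\n");
	else
		printf("poll path result: %d\n", ret);
	return 0;
}

Built and run, the model takes the requeue branch, i.e. the request would be
rescheduled through a normal task_work requeue rather than retried inline.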