On 11/20/24 22:14, David Wei wrote:
> Instead of eagerly running all available local tw, limit the amount of
> local tw done to the max of IO_LOCAL_TW_DEFAULT_MAX (20) or wait_nr. The
> value of 20 is chosen as a reasonable heuristic to allow enough work
> batching but also keep latency down.
>
> Add a retry_llist that maintains a list of local tw that couldn't be
> done in time. No synchronisation is needed since it is only modified
> within the task context.
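
In other words, each invocation drains at most max(IO_LOCAL_TW_DEFAULT_MAX,
wait_nr) entries and defers the rest. Roughly something like the sketch
below (a paraphrase, not the exact hunk; the real helper in the patch is
__io_run_local_work_loop(), and run_bounded() is a made-up name):

static int run_bounded(struct llist_node **node, struct io_tw_state *ts,
		       int budget)
{
	while (*node && budget > 0) {
		struct llist_node *next = (*node)->next;
		struct io_kiocb *req = container_of(*node, struct io_kiocb,
						    io_task_work.node);

		req->io_task_work.func(req, ts);
		*node = next;
		budget--;
	}
	/* whatever is left in *node is deferred to the retry list */
	return budget;
}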

> Signed-off-by: David Wei <dw@xxxxxxxxxxx>
> ---
>  include/linux/io_uring_types.h |  1 +
>  io_uring/io_uring.c            | 43 +++++++++++++++++++++++++---------
>  io_uring/io_uring.h            |  2 +-
>  3 files changed, 34 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
> index 593c10a02144..011860ade268 100644
> --- a/include/linux/io_uring_types.h
> +++ b/include/linux/io_uring_types.h
> @@ -336,6 +336,7 @@ struct io_ring_ctx {
>           */
>          struct {
>                  struct llist_head work_llist;
> +                struct llist_head retry_llist;

Fwiw, probably doesn't matter, but it doesn't even need to be atomic:
it's queued and spliced while holding ->uring_lock, and the pending
check is also synchronised, as there is only one possible task doing
that.
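
Just to illustrate the point (a hypothetical sketch, the field and
helper names are made up): with a single task touching it under
->uring_lock, a plain pointer would already do, no llist/cmpxchg
required:

static void io_defer_local_tw(struct io_ring_ctx *ctx,
			      struct llist_node *leftover)
{
	lockdep_assert_held(&ctx->uring_lock);
	/* single producer and consumer, both under the lock */
	ctx->retry_list = leftover;	/* plain 'struct llist_node *' field */
}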

>                  unsigned long check_cq;
>                  atomic_t cq_wait_nr;
>                  atomic_t cq_timeouts;
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 83bf041d2648..c3a7d0197636 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -121,6 +121,7 @@
...
>  static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
>                                 int min_events)
>  {
>          struct llist_node *node;
>          unsigned int loops = 0;
> -        int ret = 0;
> +        int ret, limit;
>
>          if (WARN_ON_ONCE(ctx->submitter_task != current))
>                  return -EEXIST;
>          if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
>                  atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
> +        limit = max(IO_LOCAL_TW_DEFAULT_MAX, min_events);
>  again:
> +        ret = __io_run_local_work_loop(&ctx->retry_llist.first, ts, limit);
> +        if (ctx->retry_llist.first)
> +                goto retry_done;
> +
>          /*
>           * llists are in reverse order, flip it back the right way before
>           * running the pending items.
>           */
>          node = llist_reverse_order(llist_del_all(&ctx->work_llist));
> -        while (node) {
> -                struct llist_node *next = node->next;
> -                struct io_kiocb *req = container_of(node, struct io_kiocb,
> -                                                    io_task_work.node);
> -                INDIRECT_CALL_2(req->io_task_work.func,
> -                                io_poll_task_func, io_req_rw_complete,
> -                                req, ts);
> -                ret++;
> -                node = next;
> -        }
> +        ret = __io_run_local_work_loop(&node, ts, ret);

One thing that is not so nice is that we now have this extra handling
and these checks in the hot path, and __io_run_local_work_loop() most
likely gets uninlined.

I wonder, can we just requeue it via task_work again? We could even
add a variant that efficiently adds a whole list instead of a single
entry, i.e. local_task_work_add(head, tail, ...);

I'm also curious, what's the use case you've got that is hitting
the problem?
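
Something along those lines could look roughly like the sketch below
(rough illustration only: the name local_task_work_add_batch() is made
up and the wakeup/IORING_SQ_TASKRUN signalling is omitted):

static void local_task_work_add_batch(struct io_ring_ctx *ctx,
				      struct llist_node *head,
				      struct llist_node *tail)
{
	/*
	 * llist_add_batch() splices the whole [head, tail] chain onto
	 * ->work_llist with one cmpxchg loop, so requeueing the
	 * leftover entries costs about the same as adding one request.
	 */
	if (llist_add_batch(head, tail, &ctx->work_llist)) {
		/* list was previously empty: notify/wake as usual */
	}
}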
--
Pavel Begunkov