We have seen workloads that suffer due to the way task work is currently
scheduled: non-trivial task work can run and interrupt useful work on the
workload. For example, in network servers a large async recv may run,
calling memcpy on a large packet and interrupting a send, which adds
latency.

This series adds an option to defer async work until user space calls
io_uring_enter with the GETEVENTS flag. This allows the workload to choose
when to schedule async work and gives it finer control over scheduling (at
the expense of the complexity of managing this).

Patches 1,2 are prep patches
Patch 3 changes io_uring_enter to not pre-run task work
Patches 4-6 add the new flag and functionality
Patch 7 adds tracing for the local task work running

Changes since v2:
 - add a patch to trace local task work run
 - return -EEXIST if calling from the wrong task
 - properly handle shutting down due to an exec
 - remove 'all' parameter from io_run_task_work_ctx

Changes since v1:
 - Removed the first patch (using ctx variable) which was broken
 - Require IORING_SETUP_SINGLE_ISSUER and make sure waiter task is the
   same as the submitter task
 - Just don't run task work at the start of io_uring_enter
   (Pavel's suggestion)
 - Remove io_move_task_work_from_local
 - Fix locking bugs

Dylan Yudaken (7):
  io_uring: remove unnecessary variable
  io_uring: introduce io_has_work
  io_uring: do not run task work at the start of io_uring_enter
  io_uring: add IORING_SETUP_DEFER_TASKRUN
  io_uring: move io_eventfd_put
  io_uring: signal registered eventfd to process deferred task work
  io_uring: trace local task work run

 include/linux/io_uring_types.h  |   3 +
 include/trace/events/io_uring.h |  29 ++++
 include/uapi/linux/io_uring.h   |   7 +
 io_uring/cancel.c               |   2 +-
 io_uring/io_uring.c             | 264 ++++++++++++++++++++++++++------
 io_uring/io_uring.h             |  29 +++-
 io_uring/rsrc.c                 |   2 +-
 7 files changed, 285 insertions(+), 51 deletions(-)


base-commit: 5993000dc6b31b927403cee65fbc5f9f070fa3e4
prerequisite-patch-id: cb1d024945aa728d09a131156140a33d30bc268b
-- 
2.30.2