1-13 are cleanups after splitting io_uring into files. Patch 14 from
Hao should remove some overhead from poll requests. Patch 15 from Hao
adds per-bucket spinlocks, and 16-19 do a little bit of cleanup. The
downside of per-bucket spinlocks is that they add an additional spin
lock/unlock pair on the poll request completion side, which shouldn't
matter much with 20/25.

Patch 20 uses the inline completion infra for poll requests, which
nicely improves perf when there is good tw batching.

Patch 21 implements the userspace visible side of
IORING_SETUP_SINGLE_ISSUER. It'll be used for poll requests and later
for spinlock optimisations.

22-25 introduce ->uring_lock protected cancellation hashing. It
requires us to grab ->uring_lock on the completion side, but saves
two spin lock/unlock pairs. We apply it automatically in cases where
the mutex is already likely to be held (see the 25/25 description),
so there is no additional mutex overhead and no potential latency
problems.

Numbers: I used a simple poll benchmark, temporarily stored at [1],
where each iteration queues a batch of 32 POLLIN poll requests and
triggers all of them with a read (+write).

baseline (patches 1-19):
	11720 K req/s
base + 20 (+ inline completion infra):
	12419 K req/s, ~+6%
base + 20-25 (+ uring_lock hashing):
	12804 K req/s, +9.2% from the baseline, or +3.2% relative to patch 20

[1] https://github.com/isilence/liburing/tree/poll-bench

Hao Xu (2):
  io_uring: poll: remove unnecessary req->ref set
  io_uring: switch cancel_hash to use per entry spinlock

Pavel Begunkov (23):
  io_uring: make reg buf init consistent
  io_uring: move defer_list to slow data
  io_uring: better caching for ctx timeout fields
  io_uring: refactor ctx slow data placement
  io_uring: move cancel_seq out of io-wq
  io_uring: move small helpers to headers
  io_uring: inline ->registered_rings
  io_uring: don't set REQ_F_COMPLETE_INLINE in tw
  io_uring: never defer-complete multi-apoll
  io_uring: kill REQ_F_COMPLETE_INLINE
  io_uring: refactor io_req_task_complete()
  io_uring: don't inline io_put_kbuf
  io_uring: remove check_cq checking from hot paths
  io_uring: pass poll_find lock back
  io_uring: clean up io_try_cancel
  io_uring: limit number hash buckets
  io_uring: clean up io_ring_ctx_alloc
  io_uring: use state completion infra for poll reqs
  io_uring: add IORING_SETUP_SINGLE_ISSUER
  io_uring: pass hash table into poll_find
  io_uring: introduce a struct for hash table
  io_uring: propagate locking state to poll cancel
  io_uring: mutex locked poll hashing

 include/uapi/linux/io_uring.h |   5 +-
 io_uring/cancel.c             |  27 ++--
 io_uring/cancel.h             |   4 +-
 io_uring/fdinfo.c             |  11 +-
 io_uring/io-wq.h              |   1 -
 io_uring/io_uring.c           | 149 +++++++++++-----
 io_uring/io_uring.h           |  17 +++
 io_uring/io_uring_types.h     | 109 +++++++++-------
 io_uring/kbuf.c               |  33 +++++
 io_uring/kbuf.h               |  38 +-----
 io_uring/poll.c               | 229 ++++++++++++++++++++++++----------
 io_uring/poll.h               |   3 +-
 io_uring/rsrc.c               |   9 +-
 io_uring/tctx.c               |  34 +++--
 io_uring/tctx.h               |   7 +-
 io_uring/timeout.c            |   7 +-
 16 files changed, 426 insertions(+), 257 deletions(-)

-- 
2.36.1
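
To illustrate the per-bucket spinlocks from patch 15, here is a
minimal C sketch in the spirit of the hash table the series ends up
with (patch 23 wraps it into a struct). Names and layout below are
illustrative, not the exact kernel definitions:

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/hash.h>

struct io_hash_bucket {
	spinlock_t		lock;	/* protects only this bucket's list */
	struct hlist_head	list;
};

struct io_hash_table {
	struct io_hash_bucket	*hbs;	/* 1 << hash_bits buckets */
	unsigned		hash_bits;
};

/*
 * Insert a poll request keyed by its user_data. Contention is now
 * limited to requests hashing into the same bucket, instead of every
 * insert/remove serialising on one global lock.
 */
static void hash_insert(struct io_hash_table *table, u64 user_data,
			struct hlist_node *node)
{
	struct io_hash_bucket *hb;

	hb = &table->hbs[hash_long(user_data, table->hash_bits)];
	spin_lock(&hb->lock);
	hlist_add_head(node, &hb->list);
	spin_unlock(&hb->lock);
}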
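
On the userspace side, opting into IORING_SETUP_SINGLE_ISSUER
(patch 21) amounts to passing one more setup flag, promising the
kernel that only a single task will submit requests. A hedged
liburing sketch; the fallback relies on the kernel rejecting unknown
setup flags with -EINVAL:

#include <errno.h>
#include <liburing.h>

static int setup_single_issuer_ring(struct io_uring *ring,
				    unsigned entries)
{
	int ret;

	ret = io_uring_queue_init(entries, ring,
				  IORING_SETUP_SINGLE_ISSUER);
	if (ret == -EINVAL)	/* older kernel: retry without the flag */
		ret = io_uring_queue_init(entries, ring, 0);
	return ret;
}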
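
Finally, to make the Numbers section concrete, here is a rough sketch
of one benchmark iteration matching the prose above (queue a batch of
32 POLLIN polls, trigger them all at once). It only approximates what
the tool at [1] does, it is not a copy of it:

#include <liburing.h>
#include <poll.h>
#include <unistd.h>

#define BATCH	32

static void poll_bench_iter(struct io_uring *ring, int pipe_rd, int pipe_wr)
{
	struct io_uring_cqe *cqe;
	char c = 0;
	int i;

	/*
	 * Queue a batch of one-shot POLLIN polls against the read end;
	 * assumes the ring was created with at least BATCH SQ entries.
	 */
	for (i = 0; i < BATCH; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		io_uring_prep_poll_add(sqe, pipe_rd, POLLIN);
	}
	io_uring_submit(ring);

	/* a single write makes the fd readable and fires all the polls */
	if (write(pipe_wr, &c, 1) != 1)
		return;

	/* reap the whole batch of completions */
	for (i = 0; i < BATCH; i++) {
		io_uring_wait_cqe(ring, &cqe);
		io_uring_cqe_seen(ring, cqe);
	}

	/* drain the pipe so the next iteration starts clean */
	if (read(pipe_rd, &c, 1) != 1)
		return;
}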