On 3/18/22 13:52, Pavel Begunkov wrote:
A WIP feature optimising out CQE posting spinlocking for some use cases; see 4/4 for a more detailed description. Quick benchmarking with fio's t/io_uring doing nops gives an extra ~4% throughput at QD=1 and ~2.5% at QD=4.
Non-io_uring overhead (syscalls + userspace) takes ~60% of the total execution time, so the percentage will depend quite a bit on the CPU and the kernel config; it's likely to be more than 4% on a faster setup. fwiw, I was also using IORING_ENTER_REGISTERED_RING, in case it's not yet included in the upstream version of the tool. Afterwards I also want to see if we can avoid taking uring_lock as well.
Pavel Begunkov (4):
  io_uring: get rid of raw fill cqe in kill_timeout
  io_uring: get rid of raw fill_cqe in io_fail_links
  io_uring: remove raw fill_cqe from linked timeout
  io_uring: optimise compl locking for non-shared rings

 fs/io_uring.c                 | 126 ++++++++++++++++++++++------------
 include/uapi/linux/io_uring.h |   1 +
 2 files changed, 85 insertions(+), 42 deletions(-)
-- 
Pavel Begunkov