A WIP feature optimising out CQE posting spinlocking for some use cases.
For a more detailed description see 4/4.

Quick benchmarking with fio/t/io_uring nops gives an extra ~4% of
throughput for QD=1, and ~+2.5% for QD=4.

Pavel Begunkov (4):
  io_uring: get rid of raw fill cqe in kill_timeout
  io_uring: get rid of raw fill_cqe in io_fail_links
  io_uring: remove raw fill_cqe from linked timeout
  io_uring: optimise compl locking for non-shared rings

 fs/io_uring.c                 | 126 ++++++++++++++++++++++------------
 include/uapi/linux/io_uring.h |   1 +
 2 files changed, 85 insertions(+), 42 deletions(-)

--
2.35.1
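
P.S. For readers outside the thread: below is a standalone, user-space
sketch of the general locking pattern the series is after, i.e. only
taking the completion lock when the ring's CQ can actually be touched
concurrently. It is not the kernel code from 4/4; the names (struct ring,
post_cqe, the "shared" flag) are purely illustrative, and how a ring is
marked non-shared is an assumption here.

    /*
     * Illustrative sketch only: skip the completion spinlock when CQEs
     * are posted from a single context (a "non-shared" ring).
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct cqe {
            uint64_t user_data;
            int32_t  res;
    };

    struct ring {
            pthread_spinlock_t completion_lock;
            bool shared;            /* other contexts may post CQEs */
            struct cqe cqes[64];
            unsigned tail;
    };

    static void post_cqe(struct ring *r, uint64_t user_data, int32_t res)
    {
            /* Pay for the lock only when concurrent posting is possible. */
            if (r->shared)
                    pthread_spin_lock(&r->completion_lock);

            r->cqes[r->tail & 63] =
                    (struct cqe){ .user_data = user_data, .res = res };
            r->tail++;

            if (r->shared)
                    pthread_spin_unlock(&r->completion_lock);
    }

    int main(void)
    {
            struct ring r = { .shared = false };

            pthread_spin_init(&r.completion_lock, PTHREAD_PROCESS_PRIVATE);
            post_cqe(&r, 0xdeadbeef, 0);    /* lock-free path, private ring */
            printf("tail=%u user_data=%#llx\n", r.tail,
                   (unsigned long long)r.cqes[0].user_data);
            pthread_spin_destroy(&r.completion_lock);
            return 0;
    }

The measured gain above comes from avoiding that lock/unlock pair on the
CQE posting fast path for rings that opt in.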