When benchmarking an echo server that uses io_uring with forced async (so that io-wq workers are spawned) and without IORING_SETUP_DEFER_TASKRUN, I observed some contention on tctx->task_list in the io_worker_handle_work loop. After handling a work item, we call free_work, which adds a task_work item to the shared tctx->task_list. The idea of this patchset is that each io worker queues its freed works on a local list and flushes them in a single call once the number of freed works reaches IO_REQ_ALLOC_BATCH, batching multiple free works into one task_work addition.

========= Benchmark =========

Setup:
- Host: Intel(R) Core(TM) i7-10750H CPU with 12 CPUs
- Guest: Qemu KVM with 4 vCPUs, multiqueue virtio-net (each vCPU has its
  own tx/rx queue pair)
- Test source code: https://github.com/minhbq-99/toy-echo-server

In the guest, run `./io_uring_server -a` (the number of unbound io
workers is limited to the number of CPUs, 4 in this benchmark
environment).

On the host, run `for i in $(seq 1 10); do ./client --client 8
--packet.size 2000 --duration 30s -ip 192.168.31.3; done;`. This creates
8 TCP client sockets.

Result:
- Before: 55,885.56 +- 1,782.51 req/s
- After:  59,926.25 +-   312.60 req/s (+7.23%)

Though the difference is statistically significant, the improvement is
quite small. I would really appreciate any further suggestions.

Thanks,
Quang Minh.

Bui Quang Minh (2):
  io_uring: make io_req_normal_work_add accept a list of requests
  io_uring/io-wq: try to batch multiple free works

 io_uring/io-wq.c    | 62 +++++++++++++++++++++++++++++++++++++++++++--
 io_uring/io-wq.h    |  4 ++-
 io_uring/io_uring.c | 36 +++++++++++++++++++-------
 io_uring/io_uring.h |  8 +++++-
 4 files changed, 97 insertions(+), 13 deletions(-)

-- 
2.43.0