Hi Pavel,

On Fri, Mar 10, 2023 at 07:04:14PM +0000, Pavel Begunkov wrote:
> io_uring extensively uses task_work, but when a task is waiting
> for multiple CQEs it causes lots of rescheduling. This series
> is an attempt to optimise it and be a base for future improvements.
>
> For some zc network tests eventually waiting for a portion of
> buffers I've got a 10x decrease in the number of context switches,
> which reduced the CPU consumption more than twice (17% -> 8%).
> It also helps storage cases; while running fio/t/io_uring against
> a low-performing drive it got a 2x decrease in the number of
> context switches for QD8 and ~4x for QD32.

ublk uses io_uring_cmd_complete_in_task() (io_req_task_work_add())
heavily, so I tried this patchset. I don't see an obvious change in
either IOPS or context switches when running 't/io_uring /dev/ublkb0'
against one null ublk target (ublk add -t null -z -u 1 -q 2); IOPS
is ~2.8M.

But ublk already applies batched scheduling, similar to io_uring's,
before calling io_uring_cmd_complete_in_task().

thanks,
Ming
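
P.S. In case it helps clarify what I mean by completing in task context:
below is a minimal, hypothetical sketch (not ublk's actual code) of a
driver deferring uring_cmd completion to the submitting task via
io_uring_cmd_complete_in_task(), i.e. the io_req_task_work_add() path
discussed in the cover letter. my_io_complete(), my_cmd_tw_cb() and
struct my_cmd_data are made-up names, and the signatures follow the
~6.2-era <linux/io_uring.h>, so they may differ on other kernel versions.

#include <linux/io_uring.h>

/* hypothetical per-command state stashed in the uring_cmd pdu area */
struct my_cmd_data {
	int res;
};

/* runs in the context of the task that submitted the uring_cmd */
static void my_cmd_tw_cb(struct io_uring_cmd *cmd)
{
	struct my_cmd_data *data = io_uring_cmd_to_pdu(cmd, struct my_cmd_data);

	/* post the CQE from the submitter's task context */
	io_uring_cmd_done(cmd, data->res, 0);
}

/* called from IRQ or another task's context when the backend I/O finishes */
static void my_io_complete(struct io_uring_cmd *cmd, int res)
{
	struct my_cmd_data *data = io_uring_cmd_to_pdu(cmd, struct my_cmd_data);

	data->res = res;
	/*
	 * Queue task_work on the submitting task; without batching, each
	 * add can wake and reschedule the task waiting for CQEs.
	 */
	io_uring_cmd_complete_in_task(cmd, my_cmd_tw_cb);
}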