Re: [RFC 0/2] optimise local-tw task rescheduling

On 3/11/23 17:24, Jens Axboe wrote:
On 3/10/23 12:04 PM, Pavel Begunkov wrote:
io_uring extensively uses task_work, but when a task is waiting
for multiple CQEs it causes lots of rescheduling. This series
is an attempt to optimise that and to serve as a base for future
improvements.
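For context, a minimal liburing-based sketch of the pattern being
optimised: a single submit-and-wait call blocking for a whole batch of
CQEs rather than one at a time. The helper below is illustrative and
not part of the series.

    /* Sketch (assumes liburing): wait for at least 'nr' completions
     * in one io_uring_enter(); with DEFER_TASKRUN each wakeup also
     * runs the locally queued task_work, which is where the
     * rescheduling overhead shows up. */
    #include <liburing.h>

    static int wait_for_batch(struct io_uring *ring, unsigned nr)
    {
        struct io_uring_cqe *cqe;
        unsigned head, seen = 0;
        int ret;

        ret = io_uring_submit_and_wait(ring, nr);
        if (ret < 0)
            return ret;

        io_uring_for_each_cqe(ring, head, cqe) {
            /* consume cqe->res here */
            seen++;
        }
        io_uring_cq_advance(ring, seen);
        return 0;
    }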

For some zero-copy (zc) network tests that eventually wait for a
portion of buffers, I got a 10x decrease in the number of context
switches, which cut CPU consumption by more than half (17% -> 8%).
It also helps storage cases: running fio's t/io_uring against a
low-performance drive, it got a 2x decrease in the number of context
switches at QD8 and ~4x at QD32.

Not for inclusion yet; I want to add an optimisation for the case of
waiting for a single CQE.

Ran this on the usual peak benchmark, using IRQ. IOPS is ~70M for
that, and I see context switch rates of around 8.1-8.3M/sec with the
current kernel.

Tried it out. No difference with bs=512: at QD4 everything completes
before it gets to schedule() in io_cqring_wait(); with QD32 the local
tw run __io_run_local_work() spins for 2 loops; and QD8 is somewhere
in the middle with a rare extra schedule.

For bs=4096 QD=8 I see a lot of:

io_cqring_wait() @min_events=8
schedule()
__io_run_local_work() nr=4
schedule()
__io_run_local_work() nr=4
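
That trace corresponds to the following loop shape (a simplified
userspace sketch; the names are stand-ins, not the kernel's): the task
sleeps, is woken to run a batch of local task_work, and goes back to
sleep until @min_events completions have accumulated.

    /* Illustrative sketch of the wait loop behind the trace above;
     * sched_wait() stands in for schedule(), run_local_work() for
     * __io_run_local_work(). Both are assumed helpers, not real APIs. */
    struct wait_ctx {
        unsigned posted;    /* CQEs posted so far */
    };

    extern void sched_wait(void);                       /* sleep until tw wakes us */
    extern unsigned run_local_work(struct wait_ctx *);  /* returns nr processed */

    static void cqring_wait_sketch(struct wait_ctx *ctx, unsigned min_events)
    {
        /* For min_events=8 with nr=4 per wakeup, this loops twice,
         * i.e. two schedule() calls per wait, as in the trace. */
        while (ctx->posted < min_events) {
            sched_wait();
            ctx->posted += run_local_work(ctx);
        }
    }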


And if we benchmark without and with the patch, there is a nice CPU
utilisation reduction:

CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
  0    1.18    0.00   19.24    0.00    0.00    0.00    0.00    0.00    0.00   79.57
  0    1.63    0.00   29.38    0.00    0.00    0.00    0.00    0.00    0.00   68.98

--
Pavel Begunkov


