Re: [RFC 0/2] optimise local-tw task resheduling

On 3/12/23 15:31, Jens Axboe wrote:
On 3/11/23 1:53 PM, Pavel Begunkov wrote:
On 3/11/23 20:45, Pavel Begunkov wrote:
On 3/11/23 17:24, Jens Axboe wrote:
On 3/10/23 12:04 PM, Pavel Begunkov wrote:
io_uring extensively uses task_work, but when a task is waiting
for multiple CQEs it causes lots of rescheduling. This series
is an attempt to optimise that and to serve as a base for future
improvements.

For some zc network tests that eventually wait for a portion of
buffers, I got a 10x decrease in the number of context switches,
which reduced CPU consumption by more than half (17% -> 8%).
It also helps storage cases: running fio/t/io_uring against
a low-performance drive, it got a 2x decrease in the number of
context switches at QD8 and ~4x at QD32.

Not for inclusion yet; I want to add an optimisation for the case
of waiting for a single CQE.
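
For reference, a minimal liburing sketch of the waiting pattern in
question: a SINGLE_ISSUER + DEFER_TASKRUN ("local tw") ring where the
task queues a batch of reads and waits for all of them in one call.
This is not from the series; the fd, buffer sizes and batch size are
made up for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <liburing.h>

#define BATCH	32

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqes[BATCH];
	static char bufs[BATCH][4096];
	unsigned got;
	int fd, i, ret;

	/* Local task_work is only used with SINGLE_ISSUER + DEFER_TASKRUN. */
	ret = io_uring_queue_init(256, &ring,
				  IORING_SETUP_SINGLE_ISSUER |
				  IORING_SETUP_DEFER_TASKRUN);
	if (ret)
		return 1;

	fd = open("/dev/zero", O_RDONLY);	/* stand-in for a real target */
	if (fd < 0)
		return 1;

	/* Queue a batch of reads... */
	for (i = 0; i < BATCH; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_read(sqe, fd, bufs[i], sizeof(bufs[i]), 0);
	}

	/*
	 * ...and wait for all of them at once.  Every completion queues
	 * task_work for the waiting task; how often that task_work ends up
	 * rescheduling the waiter is what the series is trying to cut down.
	 */
	ret = io_uring_submit_and_wait(&ring, BATCH);
	if (ret < 0)
		return 1;

	got = io_uring_peek_batch_cqe(&ring, cqes, BATCH);
	for (i = 0; i < (int)got; i++)
		printf("cqe %d: res %d\n", i, cqes[i]->res);
	io_uring_cq_advance(&ring, got);

	io_uring_queue_exit(&ring);
	return 0;
}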

Ran this on the usual peak benchmark, using IRQ. IOPS is ~70M for
that, and I see context switch rates of around 8.1-8.3M/sec with the
current kernel.

Applied the two patches, but didn't see much of a change? Performance is
about the same, and cx rate ditto. Confused... As you probably know,
this test waits for 32 IOs at a time.

If I had to guess, it already has perfect batching, in which case
the patch does nothing. Maybe it's due to SSD coalescing +
small ro I/O + the consistency and low latencies of Optanes,
or it might be that the scheduling / kernel side is too slow
to react.

And if that's the case, I have to note that it's quite a sterile
test; the last time I asked, the usual batching we currently
get for networking cases was 1-2.

I can definitely see this being very useful for the more
non-deterministic cases where "completions" come in more sporadically.
But for the networking case, if this is eg receives, you'd trigger the
wakeup anyway to do the actual receive? And then the cqe posting doesn't
trigger another wakeup.

True, in my case zc send notifications were the culprit.
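
For anyone not following the zc send flow, a hedged sketch (again not
from the series) of why those notifications matter for a waiter: each
IORING_OP_SEND_ZC request posts its send-result CQE and, later, a
separate notification CQE once the kernel has released the pages, and
it's those trailing notification CQEs that keep poking the waiting
task. The socket fd and buffer here are hypothetical.

#include <sys/socket.h>
#include <liburing.h>

static int send_zc_once(struct io_uring *ring, int sockfd,
			const void *buf, size_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	unsigned more;
	int ret;

	io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
	io_uring_submit(ring);

	/* CQE 1: the send result; IORING_CQE_F_MORE says a notif follows. */
	if (io_uring_wait_cqe(ring, &cqe))
		return -1;
	ret = cqe->res;
	more = cqe->flags & IORING_CQE_F_MORE;
	io_uring_cqe_seen(ring, cqe);

	/* CQE 2: the notification that the buffer can be reused. */
	if (more) {
		if (io_uring_wait_cqe(ring, &cqe))
			return -1;
		io_uring_cqe_seen(ring, cqe);
	}
	return ret;
}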

It's not in the series, but it might be better not to eagerly wake the
recv poll tw; that would give time for more data to accumulate. I'm a
bit afraid of exhausting recv queues this way, though, so I don't think
it's applicable by default.

--
Pavel Begunkov


