Re: [PATCHSET 0/3] Improve MSG_RING SINGLE_ISSUER performance

On 5/24/24 23:58, Jens Axboe wrote:
> Hi,
>
> A ring setup with IORING_SETUP_SINGLE_ISSUER, which is required to

IORING_SETUP_SINGLE_ISSUER has nothing to do with it; it's
specifically an IORING_SETUP_DEFER_TASKRUN optimisation.
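
For reference, the combination in question looks like this (liburing,
purely illustrative):

	struct io_uring ring;
	int ret;

	/* DEFER_TASKRUN can only be used together with SINGLE_ISSUER;
	 * the optimisation discussed here hinges on DEFER_TASKRUN.
	 */
	ret = io_uring_queue_init(8, &ring,
				  IORING_SETUP_SINGLE_ISSUER |
				  IORING_SETUP_DEFER_TASKRUN);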

> use IORING_SETUP_DEFER_TASKRUN, will need two round trips through
> generic task_work. This isn't ideal. This patchset attempts to rectify
> that, taking a new approach rather than trying to use the io_uring
> task_work infrastructure to handle it as in previous postings.

Not sure why you'd want to piggyback onto overflows; it's not such
a well-made and reliable piece of infrastructure, whereas the
DEFER_TASKRUN part of the task_work approach was fine.

The completion path doesn't usually look at the overflow list; it
only falls back to it when the cached CQE pointers show the CQ is
full. That means after you queue an overflow, someone may post a CQE
to the CQ via the normal path and you get reordering. Not that bad
considering it's from another ring, but a bit nasty, and it will
surely bite us back in the future, it always does.
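
Schematically, the posting side looks something like the below; a
simplified sketch (queue_overflow() is made up), not the actual
completion path:

	static bool post_cqe(struct io_ring_ctx *ctx, struct io_uring_cqe *cqe)
	{
		/* Fast path: the cached pointers say there's room in
		 * the CQ, so post directly, without ever checking
		 * whether another task queued an overflow entry earlier.
		 */
		if (ctx->cqe_cached < ctx->cqe_sentinel) {
			*ctx->cqe_cached++ = *cqe;	/* jumps ahead... */
			return true;
		}
		/* Slow path: CQ looks full, append to the overflow list */
		return queue_overflow(ctx, cqe);	/* ...of these */
	}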

That's assuming you decide io_msg_need_remote() solely based on
->task_complete and remove

	return current != target_ctx->submitter_task;

otherwise you can get two linked msg_ring target CQEs reordered.
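
i.e. the variant under discussion would boil down to something like
(sketch only):

	static inline bool io_msg_need_remote(struct io_ring_ctx *target_ctx)
	{
		/* Keyed purely off task_complete; without the
		 * submitter_task check, two linked msg_ring target
		 * CQEs can end up reordered.
		 */
		return target_ctx->task_complete;
	}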

It's also duplicating that crappy overflow code nobody cares much
about, and it's still a forced wake-up with no batching,
circumventing the normal wake-up path, i.e. io_uring tw.

I don't think we should care about the request completion latency
(sender latency); people should be more interested in the reaction
speed (receiver latency). But if you care about it for a reason,
perhaps you can just as well allocate a new request and complete the
previous one right away.
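
Very roughly, and with every helper name below made up for
illustration:

	static int io_msg_ring_complete_early(struct io_kiocb *req)
	{
		/* Hand a freshly allocated request to the target ring
		 * and post the sender's CQE immediately, so the sender
		 * never waits on the remote side to run its task_work.
		 */
		struct io_kiocb *clone = msg_ring_clone_req(req);	/* made up */

		if (!clone)
			return -ENOMEM;
		msg_ring_queue_remote(clone);		/* made up */
		msg_ring_complete_sender(req, 0);	/* made up */
		return 0;
	}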

> In a sample test app that has one thread send messages to another,
> logging both the time from the sender sending to the receiver
> receiving, and the time for just the sender to post a message and get
> the CQE back, I see the following sender latencies with the stock
> kernel:
>
> Latencies for: Sender
>      percentiles (nsec):
>       |  1.0000th=[ 4384],  5.0000th=[ 4512], 10.0000th=[ 4576],
>       | 20.0000th=[ 4768], 30.0000th=[ 4896], 40.0000th=[ 5024],
>       | 50.0000th=[ 5088], 60.0000th=[ 5152], 70.0000th=[ 5280],
>       | 80.0000th=[ 5344], 90.0000th=[ 5536], 95.0000th=[ 5728],
>       | 99.0000th=[ 8032], 99.5000th=[18048], 99.9000th=[21376],
>       | 99.9500th=[26496], 99.9900th=[59136]
>
> and with the patches:
>
> Latencies for: Sender
>      percentiles (nsec):
>       |  1.0000th=[  756],  5.0000th=[  820], 10.0000th=[  828],
>       | 20.0000th=[  844], 30.0000th=[  852], 40.0000th=[  852],
>       | 50.0000th=[  860], 60.0000th=[  860], 70.0000th=[  868],
>       | 80.0000th=[  884], 90.0000th=[  964], 95.0000th=[  988],
>       | 99.0000th=[ 1128], 99.5000th=[ 1208], 99.9000th=[ 1544],
>       | 99.9500th=[ 1944], 99.9900th=[ 2896]
>
> For the receiving side the win is smaller, as it only "suffers" from
> a single generic task_work, about a 10% win in latencies there.
>
> The idea here is to utilize the CQE overflow infrastructure for this,
> as that allows the right task to post the CQE to the ring.
>
> Patch 1 is a basic refactoring prep patch, patch 2 adds support for
> normal messages, and patch 3 adopts the same approach for fd passing.
>
>  io_uring/msg_ring.c | 151 ++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 138 insertions(+), 13 deletions(-)


--
Pavel Begunkov



