Hi,

A ring set up with IORING_SETUP_SINGLE_ISSUER, which is required to use
IORING_SETUP_DEFER_TASKRUN, needs two round trips through generic
task_work to handle a message ring operation. This isn't ideal. This
patchset attempts to rectify that, taking a new approach rather than
trying to use the io_uring task_work infrastructure to handle it, as in
previous postings.

In a sample test app that has one thread send messages to another,
logging both the time between the sender sending and the receiver
receiving, and the time for the sender to post a message and get the
CQE back, I see the following sender latencies with the stock kernel:

Latencies for: Sender
    percentiles (nsec):
     |  1.0000th=[ 4384],  5.0000th=[ 4512], 10.0000th=[ 4576],
     | 20.0000th=[ 4768], 30.0000th=[ 4896], 40.0000th=[ 5024],
     | 50.0000th=[ 5088], 60.0000th=[ 5152], 70.0000th=[ 5280],
     | 80.0000th=[ 5344], 90.0000th=[ 5536], 95.0000th=[ 5728],
     | 99.0000th=[ 8032], 99.5000th=[18048], 99.9000th=[21376],
     | 99.9500th=[26496], 99.9900th=[59136]

and with the patches:

Latencies for: Sender
    percentiles (nsec):
     |  1.0000th=[  756],  5.0000th=[  820], 10.0000th=[  828],
     | 20.0000th=[  844], 30.0000th=[  852], 40.0000th=[  852],
     | 50.0000th=[  860], 60.0000th=[  860], 70.0000th=[  868],
     | 80.0000th=[  884], 90.0000th=[  964], 95.0000th=[  988],
     | 99.0000th=[ 1128], 99.5000th=[ 1208], 99.9000th=[ 1544],
     | 99.9500th=[ 1944], 99.9900th=[ 2896]

For the receiving side the win is smaller, as it only "suffers" from a
single generic task_work: about a 10% reduction in latencies there.

The idea here is to utilize the CQE overflow infrastructure for this,
as that allows the right task to post the CQE to the ring.

Patch 1 is a basic refactoring prep patch, patch 2 adds support for
normal messages, and patch 3 adopts the same approach for fd passing.

 io_uring/msg_ring.c | 151 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 138 insertions(+), 13 deletions(-)

-- 
Jens Axboe