Re: [PATCH 2/3] io_uring/msg_ring: avoid double indirection task_work for data messages

On 5/28/24 7:18 AM, Pavel Begunkov wrote:
> On 5/24/24 23:58, Jens Axboe wrote:
>> If IORING_SETUP_SINGLE_ISSUER is set, then we can't post CQEs remotely
>> to the target ring. Instead, task_work is queued for the target ring,
>> which is used to post the CQE. To make matters worse, once the target
>> CQE has been posted, task_work is then queued with the originator to
>> fill the completion.
>>
>> This obviously adds a bunch of overhead and latency. Instead of relying
>> on generic kernel task_work for this, fill an overflow entry on the
>> target ring and flag it such that the target ring will flush it. This
>> avoids both the task_work for posting the CQE, and it means that the
>> originator CQE can be filled inline as well.
>>
>> In local testing, this reduces the latency on the sender side by 5-6x.
>>
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
>>   io_uring/msg_ring.c | 77 +++++++++++++++++++++++++++++++++++++++++++--
>>   1 file changed, 74 insertions(+), 3 deletions(-)
>>
>> diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
>> index feff2b0822cf..3f89ff3a40ad 100644
>> --- a/io_uring/msg_ring.c
>> +++ b/io_uring/msg_ring.c
>> @@ -123,6 +123,69 @@ static void io_msg_tw_complete(struct callback_head *head)
>>       io_req_queue_tw_complete(req, ret);
>>   }
>> 
>> +static struct io_overflow_cqe *io_alloc_overflow(struct io_ring_ctx *target_ctx)
>> +{
>> +    bool is_cqe32 = target_ctx->flags & IORING_SETUP_CQE32;
>> +    size_t cqe_size = sizeof(struct io_overflow_cqe);
>> +    struct io_overflow_cqe *ocqe;
>> +
>> +    if (is_cqe32)
>> +        cqe_size += sizeof(struct io_uring_cqe);
>> +
>> +    ocqe = kmalloc(cqe_size, GFP_ATOMIC | __GFP_ACCOUNT);
>> +    if (!ocqe)
>> +        return NULL;
>> +
>> +    if (is_cqe32)
>> +        ocqe->cqe.big_cqe[0] = ocqe->cqe.big_cqe[1] = 0;
>> +
>> +    return ocqe;
>> +}
>> +
>> +/*
>> + * Entered with the target uring_lock held, and will drop it before
>> + * returning. Adds a previously allocated ocqe to the overflow list on
>> + * the target, and marks it appropriately for flushing.
>> + */
>> +static void io_msg_add_overflow(struct io_msg *msg,
>> +                struct io_ring_ctx *target_ctx,
>> +                struct io_overflow_cqe *ocqe, int ret)
>> +    __releases(target_ctx->uring_lock)
>> +{
>> +    spin_lock(&target_ctx->completion_lock);
>> +
>> +    if (list_empty(&target_ctx->cq_overflow_list)) {
>> +        set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &target_ctx->check_cq);
>> +        atomic_or(IORING_SQ_TASKRUN, &target_ctx->rings->sq_flags);
> 
> TASKRUN? The normal overflow path sets IORING_SQ_CQ_OVERFLOW

Was a bit split on it - we want it run as part of waiting, but I also
wasn't super interested in exposing this to userspace as an overflow
condition, which setting IORING_SQ_CQ_OVERFLOW would do. The overflow
entry here is more of an internal implementation detail.
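
For illustration, here's a minimal userspace sketch of the path this
patch optimizes, using liburing's io_uring_prep_msg_ring(). This is my
sketch, not part of the patch; both rings live in one thread purely for
brevity - the remote posting path in question only triggers when the
target ring is owned by a different task - and error handling is
elided:

/*
 * Sketch: sender posts a data message to a target ring created with
 * IORING_SETUP_SINGLE_ISSUER. With this patch, the sender side CQE is
 * filled inline and the target gets an overflow entry that it flushes
 * the next time it enters the kernel to wait.
 */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring src, dst;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	io_uring_queue_init(8, &src, 0);
	io_uring_queue_init(8, &dst, IORING_SETUP_SINGLE_ISSUER);

	sqe = io_uring_get_sqe(&src);
	/* len lands in cqe->res, 0x1234 in cqe->user_data on dst */
	io_uring_prep_msg_ring(sqe, dst.ring_fd, 0, 0x1234, 0);
	io_uring_submit(&src);

	/* sender side completion, no longer bounced through task_work */
	io_uring_wait_cqe(&src, &cqe);
	io_uring_cqe_seen(&src, cqe);

	/* target flushes the flagged overflow entry while waiting */
	io_uring_wait_cqe(&dst, &cqe);
	printf("msg: user_data=%llu res=%d\n",
	       (unsigned long long) cqe->user_data, cqe->res);
	io_uring_cqe_seen(&dst, cqe);
	return 0;
}

And since liburing's io_uring_cq_has_overflow() keys off
IORING_SQ_CQ_OVERFLOW in sq_flags, not setting that flag here is what
keeps this invisible to userspace as an overflow condition.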

-- 
Jens Axboe




