Problem with sq->khead update and ring-full judgement


 



Hi all:
    I'm using io_uring in a program with the SQPOLL feature enabled. The userspace program actively tracks the queue status of the ring; the programming model is roughly:
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        if (sqe) {
            /* prepare next request */
            queue_count++;
        }
    }


    {
        struct io_uring_cqe *cqe = NULL;
        if (io_uring_peek_cqe(ring, &cqe) == 0) {
            /* handle completion, then mark it seen */
            io_uring_cqe_seen(ring, cqe);
            queue_count--;
        }
    }

    With this scheme, can we assume that "sq_ring_size - queue_count = sqe_left_we_can_alloc"?

    The userspace program compares queue_count with the ring's SQ size: if the queue is not full (queue_count < sq_size), it tries to get a new SQE (i.e. it initiates a new I/O request).
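
    To make this concrete, here is a minimal self-contained sketch of that counting scheme, assuming a 4096-entry SQPOLL ring and NOP requests standing in for the real I/O; the names try_submit_one()/reap_one() and the QD constant are made up just for illustration:

    #include <liburing.h>

    #define QD 4096                      /* assumed sq_ring_size from above */

    static unsigned queue_count;         /* requests the application thinks are in flight */

    /* Submit path: only ask for an SQE while the application-side counter
     * says the ring still has room. */
    static int try_submit_one(struct io_uring *ring)
    {
        struct io_uring_sqe *sqe;

        if (queue_count >= QD)           /* application-side "ring is full" */
            return 0;

        sqe = io_uring_get_sqe(ring);    /* can still return NULL, see below */
        if (!sqe)
            return 0;

        io_uring_prep_nop(sqe);          /* stand-in for the real request */
        queue_count++;
        return 1;
    }

    /* Completion path: reap one CQE, if available, and free a slot in the
     * application's accounting. */
    static void reap_one(struct io_uring *ring)
    {
        struct io_uring_cqe *cqe;

        if (io_uring_peek_cqe(ring, &cqe) == 0) {
            io_uring_cqe_seen(ring, cqe);
            queue_count--;
        }
    }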

    I am now hitting a situation under very high I/O load: the userspace program submits a lot of SQEs (around 2000) at the same time, while sq_ring_size is 4096. In the kernel, in __io_sq_thread() -> io_submit_sqes(), I can see that nr (to_submit) is also over 2000. At some point a strange situation appears: the userspace program finds that the SQ ring is not full, but the kernel-visible state (in fact, liburing's io_uring_get_sqe()) says the SQ ring is full.

    After analyzing, I find the reason: the kernel updates sq->khead only after it has submitted *all* of the SQEs. What happens in my program is that, before the kernel updates khead, userspace has already received many CQEs, each of which decrements queue_count. After decrementing, the program believes the SQ ring is not full and tries to start a new I/O request, but since sq->khead has not been updated yet, io_uring_get_sqe() returns NULL.
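
    To make the race concrete, the free-space check inside io_uring_get_sqe() looks roughly like the sketch below. This is my paraphrase of liburing rather than the exact source (field names, barriers and the exact test differ between versions), and get_sqe_sketch() is a made-up name; the point is only that the decision depends on the khead value the kernel publishes:

    #include <liburing.h>

    /* Paraphrased sketch of the SQ-full check in io_uring_get_sqe(). */
    static struct io_uring_sqe *get_sqe_sketch(struct io_uring *ring)
    {
        struct io_uring_sq *sq = &ring->sq;
        unsigned head = *sq->khead;   /* advanced by the SQPOLL thread; liburing uses an acquire load here */
        unsigned next = sq->sqe_tail + 1;

        if (next - head > *sq->kring_entries)
            return NULL;              /* looks "full" until the kernel moves khead forward */

        struct io_uring_sqe *sqe = &sq->sqes[sq->sqe_tail & *sq->kring_mask];
        sq->sqe_tail = next;
        return sqe;
    }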

    My questions are:

    1. Is the userspace 'queue_count' judgement reasonable? From a 'traditional' point of view, if we want to know whether the SQ ring is full, we can just call io_uring_get_sqe() and check its return value. Maybe this discussion is similar in a way to this issue: https://github.com/axboe/liburing/issues/88
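
    By the 'traditional' approach I mean something like the following sketch, where io_uring_get_sqe() returning NULL is itself the "ring full" signal and no separate counter is kept (start_request() is a made-up name, and a NOP again stands in for the real request):

    #include <errno.h>
    #include <liburing.h>

    static int start_request(struct io_uring *ring)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -EBUSY;      /* ring is full as far as liburing can tell; retry later */

        io_uring_prep_nop(sqe); /* stand-in for the real request */
        return 0;
    }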

    2. I must confess it is very strange that the average latency of receiving a CQE is shorter than the time it takes to consume an SQE (it really is happening now, and I am still trying to find out why, or whether it is just a bug on my side). Assuming this scenario is legitimate, it seems we could update sq->khead more often to get higher throughput: when userspace becomes more 'sensitive' to freed SQ slots, it can issue more I/O requests. And it should not add much overhead if the kernel atomically updates khead a little more frequently.







