Re: [PATCH 05/19] Add io_uring IO interface

On 2/12/19 3:03 PM, Jens Axboe wrote:
> On 2/12/19 2:42 PM, Jann Horn wrote:
>> On Sat, Feb 9, 2019 at 5:15 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>> On 2/8/19 3:12 PM, Jann Horn wrote:
>>>> On Fri, Feb 8, 2019 at 6:34 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>> The submission queue (SQ) and completion queue (CQ) rings are shared
>>>>> between the application and the kernel. This eliminates the need to
>>>>> copy data back and forth to submit and complete IO.
>>>>>
>>>>> IO submissions use the io_uring_sqe data structure, and completions
>>>>> are generated in the form of io_uring_cqe data structures. The SQ
>>>>> ring holds indices into the io_uring_sqe array, which makes it possible
>>>>> to submit a batch of IOs without them being contiguous in the ring.
>>>>> The CQ ring is always contiguous, as completion events are inherently
>>>>> unordered, and hence any io_uring_cqe entry can point back to an
>>>>> arbitrary submission.
>>>>>
>>>>> Two new system calls are added for this:
>>>>>
>>>>> io_uring_setup(entries, params)
>>>>>         Sets up an io_uring instance for doing async IO. On success,
>>>>>         returns a file descriptor that the application can mmap to
>>>>>         gain access to the SQ ring, CQ ring, and io_uring_sqes.
>>>>>
>>>>> io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
>>>>>         Initiates IO against the rings mapped to this fd, or waits for
>>>>>         them to complete, or both. The behavior is controlled by the
>>>>>         parameters passed in. If 'to_submit' is non-zero, then we'll
>>>>>         try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
>>>>>         kernel will wait for 'min_complete' events, if they aren't
>>>>>         already available. It's valid to set IORING_ENTER_GETEVENTS
>>>>>         with 'min_complete' == 0; this allows the kernel to return
>>>>>         already completed events without waiting for them. This is
>>>>>         only useful for polled IO, as for IRQ driven IO the
>>>>>         application can just check the CQ ring without entering
>>>>>         the kernel.
>>>>>
>>>>> With this setup, it's possible to do async IO with a single system
>>>>> call. Future developments will enable polled IO with this interface,
>>>>> and polled submission as well. The latter will enable an application
>>>>> to do IO without doing ANY system calls at all.
>>>>>
>>>>> For IRQ driven IO, an application only needs to enter the kernel for
>>>>> completions if it wants to wait for them to occur.
>>>>>
>>>>> Each io_uring is backed by a workqueue, to support buffered async IO
>>>>> as well. We will only punt to an async context if the command would
>>>>> need to wait for IO on the device side. Any request that can be
>>>>> served directly from the page cache is completed inline. This avoids
>>>>> the slowness of the usual thread pool approach, since cached data is
>>>>> accessed just as quickly as with a sync interface.
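
(Side note for anyone wiring this up: the application side of the above
boils down to roughly the following. This is a trimmed-down sketch, not
the test tool from the series; it assumes the headers added by this
patch, including the __NR_io_uring_* syscall numbers, and omits most
error handling.)

#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int ring_example(void)
{
	struct io_uring_params p = { 0 };
	void *sq_ptr;
	int ring_fd;

	/* io_uring_setup(entries, params): fd backing the SQ/CQ rings */
	ring_fd = syscall(__NR_io_uring_setup, 4, &p);
	if (ring_fd < 0)
		return -1;

	/* map the SQ ring; the CQ ring and the io_uring_sqe array are
	 * mapped the same way at IORING_OFF_CQ_RING / IORING_OFF_SQES */
	sq_ptr = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
		      PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		      ring_fd, IORING_OFF_SQ_RING);
	if (sq_ptr == MAP_FAILED)
		return -1;

	/* ... fill an io_uring_sqe, store its index in the SQ array,
	 * then publish it by advancing the SQ ring tail ... */

	/* submit one SQE and wait for at least one completion */
	return syscall(__NR_io_uring_enter, ring_fd, 1, 1,
		       IORING_ENTER_GETEVENTS, NULL, 0);
}
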
>> [...]
>>>>> +static int io_submit_sqe(struct io_ring_ctx *ctx, const struct sqe_submit *s)
>>>>> +{
>>>>> +       struct io_kiocb *req;
>>>>> +       ssize_t ret;
>>>>> +
>>>>> +       /* enforce forwards compatibility on users */
>>>>> +       if (unlikely(s->sqe->flags))
>>>>> +               return -EINVAL;
>>>>> +
>>>>> +       req = io_get_req(ctx);
>>>>> +       if (unlikely(!req))
>>>>> +               return -EAGAIN;
>>>>> +
>>>>> +       req->rw.ki_filp = NULL;
>>>>> +
>>>>> +       ret = __io_submit_sqe(ctx, req, s, true);
>>>>> +       if (ret == -EAGAIN) {
>>>>> +               memcpy(&req->submit, s, sizeof(*s));
>>>>> +               INIT_WORK(&req->work, io_sq_wq_submit_work);
>>>>> +               queue_work(ctx->sqo_wq, &req->work);
>>>>> +               ret = 0;
>>>>> +       }
>>>>> +       if (ret)
>>>>> +               io_free_req(req);
>>>>> +
>>>>> +       return ret;
>>>>> +}
>>>>> +
>>>>> +static void io_commit_sqring(struct io_ring_ctx *ctx)
>>>>> +{
>>>>> +       struct io_sq_ring *ring = ctx->sq_ring;
>>>>> +
>>>>> +       if (ctx->cached_sq_head != ring->r.head) {
>>>>> +               WRITE_ONCE(ring->r.head, ctx->cached_sq_head);
>>>>> +               /* write side barrier of head update, app has read side */
>>>>> +               smp_wmb();
>>>>
>>>> Can you elaborate on what this memory barrier is doing? Don't you need
>>>> some sort of memory barrier *before* the WRITE_ONCE(), to ensure that
>>>> nobody sees the updated head before you're done reading the submission
>>>> queue entry? Or is that barrier elsewhere?
>>>
>>> The matching read barrier is in the application, it must do that before
>>> reading ->head for the SQ ring.
>>>
>>> For the other barrier, since the ring->r.head now has a READ_ONCE(),
>>> that should be all we need to ensure that loads are done.
>>
>> READ_ONCE() / WRITE_ONCE are not hardware memory barriers that enforce
>> ordering with regard to concurrent execution on other cores. They are
>> only compiler barriers, influencing the order in which the compiler
>> emits things. (Well, unless you're on alpha, where READ_ONCE() implies
>> a memory barrier that prevents reordering of dependent reads.)
>>
>> As far as I can tell, between the READ_ONCE(ring->array[...]) in
>> io_get_sqring() and the WRITE_ONCE() in io_commit_sqring(), you have
>> no *hardware* memory barrier that prevents reordering against
>> concurrently running userspace code. As far as I can tell, the
>> following could happen:
>>
>>  - The kernel reads from ring->array in io_get_sqring(), then updates
>> the head in io_commit_sqring(). The CPU reorders the memory accesses
>> such that the write to the head becomes visible before the read from
>> ring->array has completed.
>>  - Userspace observes the write to the head and reuses the array slots
>> the kernel has freed with the write, clobbering ring->array before the
>> kernel reads from ring->array.
> 
> I'd say this is highly theoretical for the normal use case, as we
> will have submitted IO in between. Hence the load must have been done.
> The only case that needs it is the sq thread case, since we bundle
> those up. This should do it:

Actually, I take that back: in this particular case the sq thread is
the only one that reads it. Hence it will have fully submitted the SQEs
it read before reading a new round. Not that it matters much for that
case, as a preempt would have implied a full barrier anyway.

The non-sq thread case doesn't need the store-vs-load ordering barrier,
as SQEs are either discarded or submitted before we commit the sqring.
Since that's the case, all loads are by definition done before the head
is updated.
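
Just to spell out the pairing being discussed, in case it helps anyone
following along -- illustrative sketch only, using smp_store_release()
and smp_load_acquire() as shorthand for the explicit barriers, not what
the patch actually does:

/* Kernel commit side (sketch): every load from sq_ring->array[] and
 * the SQEs themselves must be ordered before the new head becomes
 * visible, otherwise the app could reuse a slot we haven't finished
 * reading. */
static void io_commit_sqring_sketch(struct io_ring_ctx *ctx)
{
	struct io_sq_ring *ring = ctx->sq_ring;

	if (ctx->cached_sq_head != ring->r.head)
		/* release: orders the prior loads before the head store */
		smp_store_release(&ring->r.head, ctx->cached_sq_head);
}

/* App side (sketch): the matching acquire (or the userspace equivalent
 * of READ_ONCE() + smp_rmb()) before reusing any SQE slots below head. */
static unsigned app_sq_head_sketch(struct io_sq_ring *ring)
{
	return smp_load_acquire(&ring->r.head);
}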

-- 
Jens Axboe
