Re: [PATCH 7/9] io_uring: add per-task callback handler

On 2/20/20 4:12 PM, Jann Horn wrote:
> On Fri, Feb 21, 2020 at 12:00 AM Jens Axboe <axboe@xxxxxxxxx> wrote:
>> On 2/20/20 3:23 PM, Jann Horn wrote:
>>> On Thu, Feb 20, 2020 at 11:14 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>> On 2/20/20 3:02 PM, Jann Horn wrote:
>>>>> On Thu, Feb 20, 2020 at 9:32 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>>>> For poll requests, it's not uncommon to link a read (or write) after
>>>>>> the poll to execute immediately after the file is marked as ready.
>>>>>> Since the poll completion is called inside the waitqueue wake up handler,
>>>>>> we have to punt that linked request to async context. This slows down
>>>>>> the processing, and actually means it's faster to not use a link for this
>>>>>> use case.
> [...]
>>>>>> -static void io_poll_trigger_evfd(struct io_wq_work **workptr)
>>>>>> +static void io_poll_task_func(struct callback_head *cb)
>>>>>>  {
>>>>>> -       struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
>>>>>> +       struct io_kiocb *req = container_of(cb, struct io_kiocb, sched_work);
>>>>>> +       struct io_kiocb *nxt = NULL;
>>>>>>
>>>>> [...]
>>>>>> +       io_poll_task_handler(req, &nxt);
>>>>>> +       if (nxt)
>>>>>> +               __io_queue_sqe(nxt, NULL);
>>>>>
>>>>> This can now get here from anywhere that calls schedule(), right?
>>>>> Which means that this might almost double the required kernel stack
>>>>> size, if one codepath exists that calls schedule() while near the
>>>>> bottom of the stack and another codepath exists that goes from here
>>>>> through the VFS and again uses a big amount of stack space? This is a
>>>>> somewhat ugly suggestion, but I wonder whether it'd make sense to
>>>>> check whether we've consumed over 25% of stack space, or something
>>>>> like that, and if so, directly punt the request.
> [...]
>>>>> Also, can we recursively hit this point? Even if __io_queue_sqe()
>>>>> doesn't *want* to block, the code it calls into might still block on a
>>>>> mutex or something like that, at which point the mutex code would call
>>>>> into schedule(), which would then again hit sched_out_update() and get
>>>>> here, right? As far as I can tell, this could cause unbounded
>>>>> recursion.
>>>>
>>>> The sched_work items are pruned before being run, so that can't happen.
>>>
>>> And is it impossible for new ones to be added in the meantime if a
>>> second poll operation completes in the background just when we're
>>> entering __io_queue_sqe()?
>>
>> True, that can happen.
>>
>> I wonder whether, if we just prevent the recursion, we can ignore
>> most of it. E.g. never process the sched_work list if we're not at
>> the top level, so to speak.
>>
>> This should also prevent the deadlock that you mentioned with FUSE
>> in the next email that just rolled in.
> 
> But there the first ->read_iter could be from outside io_uring. So you
> don't just have to worry about nesting inside an already-running uring
> work; you also have to worry about nesting inside more or less
> anything else that might be holding mutexes. So I think you'd pretty
> much have to whitelist known-safe schedule() callers, or something
> like that.

I'll see if I can come up with something for that. Ideally any issue
with IOCB_NOWAIT set should be honored, and trylock etc. should be
used. But I don't think we can fully rely on that; we need something a
bit more solid...
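
For reference, the kind of check Jann suggests above might look
roughly like the sketch below. This is only a sketch: it assumes a
downward-growing stack and an arch that provides
current_stack_pointer, and io_stack_nearly_full() is a made-up name,
not anything in the tree:

#include <linux/sched/task_stack.h>

/*
 * Hypothetical helper: report whether more than 25% of the kernel
 * stack is already in use, so the caller can punt the request to
 * async context instead of issuing it inline. Not from the patchset.
 */
static bool io_stack_nearly_full(void)
{
	unsigned long base = (unsigned long)task_stack_page(current);
	unsigned long used = base + THREAD_SIZE - current_stack_pointer;

	return used > THREAD_SIZE / 4;
}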

> Taking a step back: Do you know why this whole approach brings the
> kind of performance benefit you mentioned in the cover letter? 4x is a
> lot... Is it that expensive to take a trip through the scheduler?
> I wonder whether the performance numbers for the echo test would
> change if you commented out io_worker_spin_for_work()...

If anything, I'd expect removing the spin to make it worse. There's no
magic to why it's faster: if you offload essentially synchronous work
to a thread, you're going to take a huge hit in performance. It's the
difference between:

1) Queue work with thread, wake up thread
2) Thread wakes, starts work, goes to sleep
3) Data available, thread is woken, does work
4) Thread signals completion of work

versus just completing the work when it's ready, with no switches to a
worker thread at all. As the cover letter mentions, the single-client
case is the biggest win, because everything is idle there. If the
thread doing the offload can be kept running, the gains shrink, since
we're no longer paying those wake/sleep penalties.
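
For context, the use case from the patch description (a read linked
behind a poll, so the read runs as soon as the file is ready) looks
roughly like this from userspace. A sketch with liburing, assuming
IORING_OP_READ support (kernel 5.6) and with error handling omitted;
queue_poll_then_read() is just an illustrative name:

#include <liburing.h>
#include <poll.h>

static void queue_poll_then_read(struct io_uring *ring, int fd,
				 void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;

	/* Wait for the fd to become readable... */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_poll_add(sqe, fd, POLLIN);
	sqe->flags |= IOSQE_IO_LINK;	/* chain the read to the poll */

	/* ...then issue the read as soon as the poll completes. */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);

	io_uring_submit(ring);
}

With this series, that linked read can be issued from the task itself
when the poll triggers, instead of being punted to an async worker.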

-- 
Jens Axboe



