Re: io_uring and spurious wake-ups from eventfd

On 1/8/20 12:36 AM, Mark Papadakis wrote:
> 
> 
>> On 7 Jan 2020, at 10:34 PM, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>
>> On 1/7/20 1:26 PM, Jens Axboe wrote:
>>> On 1/7/20 8:55 AM, Mark Papadakis wrote:
>>>> This is perhaps an odd request, but if it's trivial to implement
>>>> support for the feature described below, it could help others the
>>>> way it'd help me (I've been experimenting with io_uring for some time now).
>>>>
>>>> Being able to register an eventfd with an io_uring context is very
>>>> handy if you e.g. have some sort of reactor thread multiplexing I/O
>>>> using epoll etc., where you want to be notified when there are pending
>>>> CQEs to drain. The problem, such as it is, is that this can result in
>>>> unnecessary/spurious wake-ups.
>>>>
>>>> If, for example, you are monitoring non-blocking sockets for EPOLLIN,
>>>> and whenever poll reports pending bytes on a socket you reserve an SQE
>>>> for preadv() to read that data and then call io_uring_enter() to
>>>> submit the SQEs, then, because the data is readily available, your
>>>> completions will be there as soon as io_uring_enter() returns and you
>>>> can process them right away. The “problem” is that poll will wake up
>>>> again in the very next reactor loop iteration, because the eventfd was
>>>> tripped (which is reasonable but unnecessary).
>>>>
>>>> What if there were a flag for io_uring_setup() so that the eventfd
>>>> would only be tripped for CQEs that were processed asynchronously, or,
>>>> if that's non-trivial, only for CQEs that reference file FDs?
>>>>
>>>> That’d help with that spurious wake-up.
>>>
>>> One easy way to do that would be for the application to signal that it
>>> doesn't want eventfd notifications for certain requests. Like using an
>>> IOSQE_ flag for that. Then you could set that on the requests you submit
>>> in response to triggering an eventfd event.
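
A minimal sketch of what that per-request opt-out could look like from
userspace, assuming a hypothetical IOSQE_NO_EVENTFD flag (no such flag
exists; io_uring_get_sqe(), io_uring_prep_readv() and sqe->flags are
plain liburing):

#include <errno.h>
#include <sys/uio.h>
#include <liburing.h>

/* Made-up bit, purely to illustrate the per-request opt-out idea. */
#define IOSQE_NO_EVENTFD	(1U << 7)

static int queue_read(struct io_uring *ring, int sockfd, struct iovec *iov)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	io_uring_prep_readv(sqe, sockfd, iov, 1, 0);
	/* This read is issued in response to an eventfd/epoll wake-up,
	 * so ask the kernel not to trip the eventfd again for it. */
	sqe->flags |= IOSQE_NO_EVENTFD;
	return 0;
}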
>>
> 
> 
> Thanks Jens,
> 
> This is great, but perhaps there is a slightly more optimal way to do
> this. Ideally, io_uring should trip the eventfd only if there are new
> completions available that weren't produced in the context of an
> io_uring_enter() call. That is to say, if any SQEs can be served
> immediately (because the data is readily available in buffers/caches
> in the kernel), then their respective CQEs will be produced in the
> context of the io_uring_enter() that submitted said SQEs (and thus the
> CQEs can be processed immediately after io_uring_enter() returns). So,
> if any CQEs are placed in the ring at any other time, i.e. not during
> an io_uring_enter() call, then those completions were produced
> asynchronously and the eventfd can be tripped; otherwise, there is no
> need to trip the eventfd at all.
> 
> e.g. (pseudocode):
> void produce_completion(cfq_ctx *ctx, const bool in_io_uring_enter_ctx) {
>         cqe_ring_push(cqe_from_ctx(ctx));
>         /* Only trip the eventfd for completions produced outside of
>          * io_uring_enter(), i.e. asynchronously. */
>         if (!in_io_uring_enter_ctx && eventfd_registered())
>                 trip_iouring_eventfd();
> }
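
For reference, a minimal sketch of the setup being described, using the
epoll/eventfd/liburing calls that exist today (error handling trimmed);
the kernel-side policy above decides when that eventfd gets tripped:

#include <liburing.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

/* Reactor-style wiring: epoll watches the eventfd, and io_uring trips
 * the eventfd when CQEs are posted. With plain IORING_REGISTER_EVENTFD,
 * CQEs posted inline during io_uring_enter() also trip it, which is the
 * spurious wake-up discussed in this thread. */
static int setup_cq_notifications(struct io_uring *ring, int epfd)
{
	int efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };

	if (efd < 0)
		return -1;
	if (io_uring_register_eventfd(ring, efd) < 0)
		return -1;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) < 0)
		return -1;
	return efd;
}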

I see what you're saying, so essentially only trigger eventfd
notifications if the completions happen async. That does make a lot of
sense, and it would be cleaner than having to flag this per request as
well. I think we'd still need to make it opt-in, though, as it changes
the existing behavior.

The best way to do that would be to add IORING_REGISTER_EVENTFD_ASYNC or
something like that. It'd do the exact same thing as
IORING_REGISTER_EVENTFD, but only trigger the eventfd if completions
happen async.
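
On the application side that would just be a different registration
call; a rough sketch, assuming a liburing helper along the lines of
io_uring_register_eventfd_async() wrapping the new opcode:

#include <errno.h>
#include <liburing.h>

/* Prefer async-only notifications; fall back to the existing behavior
 * if the kernel doesn't know the new opcode. The _async() helper name
 * is an assumption, mirroring io_uring_register_eventfd(). */
static int register_cq_eventfd(struct io_uring *ring, int efd)
{
	int ret = io_uring_register_eventfd_async(ring, efd);

	if (ret == -EINVAL)
		ret = io_uring_register_eventfd(ring, efd);
	return ret;
}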

What do you think?

-- 
Jens Axboe



