On 2/7/25 5:38 AM, Pavel Begunkov wrote:
> On 2/4/25 19:46, Jens Axboe wrote:
>> For existing epoll event loops that can't fully convert to io_uring,
>> the usual approach is to add the io_uring fd to the epoll instance
>> and use epoll_wait() to wait on both "legacy" and io_uring events.
>> While this works, it isn't optimal as:
>>
>> 1) epoll_wait() is pretty limited in what it can do. It does not support
>> partial reaping of events, or waiting on a batch of events.
>>
>> 2) When an io_uring ring is added to an epoll instance, it activates the
>> io_uring "I'm being polled" logic which slows things down.
>>
>> Rather than use this approach, with EPOLL_WAIT support added to io_uring,
>> event loops can use the normal io_uring wait logic for everything, as
>> long as an epoll wait request has been armed with io_uring.
>>
>> Note that IORING_OP_EPOLL_WAIT does NOT take a timeout value, as this
>> is an async request. Waiting on io_uring events in general has various
>> timeout parameters, and those are the ones that should be used when
>> waiting on any kind of request. If events are immediately available for
>> reaping, then this opcode will return them immediately. If none are
>> available, it will post an async completion when they become available.
>>
>> cqe->res will contain either an error code (< 0 value) for a malformed
>> request, invalid epoll instance, etc., or a positive result indicating
>> how many events were reaped.
>>
>> IORING_OP_EPOLL_WAIT requests may be canceled using the normal io_uring
>> cancelation infrastructure. The poll logic for managing ownership is
>> adapted to guard the epoll side too.
> ...
>> diff --git a/io_uring/epoll.c b/io_uring/epoll.c
>> index 7848d9cc073d..5a47f0cce647 100644
>> --- a/io_uring/epoll.c
>> +++ b/io_uring/epoll.c
> ...
>> +static void io_epoll_retry(struct io_kiocb *req, struct io_tw_state *ts)
>> +{
>> +	int v;
>> +
>> +	do {
>> +		v = atomic_read(&req->poll_refs);
>> +		if (unlikely(v != 1)) {
>> +			if (WARN_ON_ONCE(!(v & IO_POLL_REF_MASK)))
>> +				return;
>> +			if (v & IO_POLL_CANCEL_FLAG) {
>> +				__io_epoll_cancel(req);
>> +				return;
>> +			}
>> +			if (v & IO_POLL_FINISH_FLAG)
>> +				return;
>> +		}
>> +		v &= IO_POLL_REF_MASK;
>> +	} while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);
>
> I haven't looked deeply into the set, but this loop looks very
> suspicious. The entire purpose of the twin loop in poll.c is
> not to lose events while doing processing, which is why the
> processing happens before the decrement...
>
>> +	io_req_task_submit(req, ts);
>
> Maybe the issue is supposed to handle that, but this one is
> not allowed unless you fully unhash all the polling. Once you
> have dropped refs, the poll wait entry is free to claim the
> request and, for example, queue a task work, and io_req_task_submit()
> would decide to queue it as well. It's likely not the only
> race that can happen.

I'm going to send out a new version with the multishot support dropped
for now, as it both a) simplifies the series, and b) I'm not super
convinced it can be sanely used. In any case, it can be left for later.

Once I do that, please take a look at the ownership side and let's
continue the discussion there!

-- 
Jens Axboe