Re: [PATCH 7/7] io_uring/epoll: add support for IORING_OP_EPOLL_WAIT

On 2/7/25 17:32, Jens Axboe wrote:
For existing epoll event loops that can't fully convert to io_uring,
the usual approach is to add the io_uring fd to the epoll instance and
use epoll_wait() to wait on both "legacy" and io_uring events. While
this works, it isn't optimal, as:

1) epoll_wait() is pretty limited in what it can do. It does not support
    partial reaping of events, or waiting on a batch of events.

2) When an io_uring ring is added to an epoll instance, it activates the
    io_uring "I'm being polled" logic which slows things down.

Rather than use this approach, with EPOLL_WAIT support added to io_uring,
event loops can use the normal io_uring wait logic for everything, as
long as an epoll wait request has been armed with io_uring.

Note that IORING_OP_EPOLL_WAIT does NOT take a timeout value, as this
is an async request. Waiting on io_uring events in general has various
timeout parameters, and those are the ones that should be used when
waiting on any kind of request. If events are immediately available for
reaping, then this opcode will return those immediately. If none are
available, then it will post an async completion when they become
available.

cqe->res will contain either an error code (a value < 0) for a malformed
request, an invalid epoll instance, etc., or a positive result indicating
how many events were reaped.

IORING_OP_EPOLL_WAIT requests may be canceled using the normal io_uring
cancelation infrastructure. The poll logic for managing ownership is
adapted to guard the epoll side too.
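
For illustration, arming and reaping such a request from an event loop
could look roughly like this (liburing-style sketch; the exact SQE field
layout for the new opcode is an assumption here, and handle_event() is
a stand-in for application code):

#include <liburing.h>
#include <sys/epoll.h>

#define EV_BATCH	64

void handle_event(struct epoll_event *ev);	/* app-provided */

static void wait_mixed_events(struct io_uring *ring, int epfd)
{
	struct epoll_event evs[EV_BATCH];
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int i;

	/* Arm the epoll wait. Assumed layout: epoll fd in sqe->fd,
	 * event buffer in sqe->addr, maxevents in sqe->len.
	 */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_rw(IORING_OP_EPOLL_WAIT, sqe, epfd, evs, EV_BATCH, 0);
	sqe->user_data = 1;
	io_uring_submit(ring);

	/* Any timeout belongs here, on the io_uring wait side, e.g. with
	 * io_uring_wait_cqe_timeout(), not on the epoll request itself.
	 */
	if (io_uring_wait_cqe(ring, &cqe))
		return;
	if (cqe->user_data == 1 && cqe->res > 0) {
		for (i = 0; i < cqe->res; i++)
			handle_event(&evs[i]);
	}
	io_uring_cqe_seen(ring, cqe);
}

A pending request can be canceled like any other, e.g. with
io_uring_prep_cancel64(sqe, 1, 0) keyed on the user_data set above.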

Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
---
  include/linux/io_uring_types.h |   4 +
  include/uapi/linux/io_uring.h  |   1 +
  io_uring/cancel.c              |   5 ++
  io_uring/epoll.c               | 143 +++++++++++++++++++++++++++++++++
  io_uring/epoll.h               |  22 +++++
  io_uring/io_uring.c            |   5 ++
  io_uring/opdef.c               |  14 ++++
  7 files changed, 194 insertions(+)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index e2fef264ff8b..031ba708a81d 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -369,6 +369,10 @@ struct io_ring_ctx {
...
+bool io_epoll_wait_remove_all(struct io_ring_ctx *ctx, struct io_uring_task *tctx,
+			      bool cancel_all)
+{
+	return io_cancel_remove_all(ctx, tctx, &ctx->epoll_list, cancel_all, __io_epoll_wait_cancel);
+}
+
+int io_epoll_wait_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd,
+			 unsigned int issue_flags)
+{
+	return io_cancel_remove(ctx, cd, issue_flags, &ctx->epoll_list, __io_epoll_wait_cancel);
+}
+
+static void io_epoll_retry(struct io_kiocb *req, struct io_tw_state *ts)
+{
+	int v;
+
+	do {
+		v = atomic_read(&req->poll_refs);
+		if (unlikely(v != 1)) {
+			if (WARN_ON_ONCE(!(v & IO_POLL_REF_MASK)))
+				return;
+			if (v & IO_POLL_CANCEL_FLAG) {
+				__io_epoll_cancel(req);
+				return;
+			}
+		}
+		v &= IO_POLL_REF_MASK;
+	} while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);

I actually looked up the epoll code this time. If we disregard
cancellations, you have only 1 wait entry, which should've been removed
from the queue by io_epoll_wait_fn(), in which case the entire loop is
doing nothing as there is no one to race with. ->hash_node is the only
shared part, but it's sync'ed by the mutex.

As for cancellation, epoll_wait_remove() also removes the entry, and
you can rely on its return value to tell whether the entry was removed
there, from which you can derive whether you're the current owner.
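
I.e. the single-shot cancel path could reduce to something like this
(sketch, reusing the same pseudo-helpers as below):

	/* wakeup side: io_epoll_wait_fn() already dequeued the entry,
	 * so it owns the request and completes it directly */

	/* cancel side */
	if (epoll_wait_remove(req)) {
		/* we dequeued the entry, so we own the request */
		complete_req(req, -ECANCELED);
	}
	/* else the wakeup side owns it and will complete the request */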

This handling might be useful for the multishot mode, perhaps
along the lines of:

io_epoll_retry(req)
{
	do {
		res = epoll_get_events();
		if (one_shot || cancel) {
			wq_remove();
			unhash();
			complete_req(res);
			return;
		}

		post_cqe(res);

		// now recheck if new events came while we were processing
		// the previous batch.
	} while (refs_drop(req->poll_refs));
}

epoll_issue(issue_flags) {
	queue_poll();
	return;
}

But it might be better to just poll the epoll fd, reuse all the
io_uring polling machinery, and implement IO_URING_F_MULTISHOT for
the epoll opcode.

epoll_issue(issue_flags) {
	if (!(flags & IO_URING_F_MULTISHOT))
		return -EAGAIN;

	res = epoll_check_events();
	post_cqe(res);
	etc.
}
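
Filling that in a bit (still a sketch; helper names and the exact
return-code conventions are schematic, not the real internal API):

epoll_issue(req, issue_flags)
{
	/* First pass: nothing armed yet, let the core poll machinery
	 * arm a poll on the epoll fd and retry this request on wakeup.
	 */
	if (!(issue_flags & IO_URING_F_MULTISHOT))
		return -EAGAIN;

	/* Poll wakeup: reap whatever is ready without blocking. */
	res = epoll_check_events();
	if (!res)
		return -EAGAIN;		/* spurious wakeup, stay armed */
	if (res < 0)
		return res;		/* complete with an error CQE */

	/* Post a CQE flagged with IORING_CQE_F_MORE and leave the
	 * poll armed for the next batch of events.
	 */
	post_cqe(res, IORING_CQE_F_MORE);
	return -EAGAIN;
}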

I think that would make this patch quite trivial, including
the multishot mode.

--
Pavel Begunkov




