On 3/24/22 8:42 AM, Jens Axboe wrote:
> On 3/24/22 8:34 AM, Dylan Yudaken wrote:
>> Do not set REQ_F_NOWAIT if the socket is non-blocking. When set, this
>> flag causes the accept to immediately post a CQE with EAGAIN, which
>> means you cannot perform an accept SQE on a non-blocking socket
>> asynchronously.
>>
>> By removing the flag when there is no pending accept, poll is armed as
>> usual, and when a connection comes in the CQE is posted.
>>
>> Note: if multiple accepts are queued up, then when a single connection
>> comes in they all complete, one with the connection and the remaining
>> with EAGAIN. This could be improved in the future but will require a
>> lot of io_uring changes.
>
> Not true - all you'd need to do is have behavior similar to
> EPOLLEXCLUSIVE, which we already support for separate poll. Could be
> done for internal poll quite easily, and _probably_ makes sense to do
> by default for most cases in fact.

Quick wire-up below. Not tested at all, but it really should be basically
as simple as this.

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4d98cc820a5c..8dfacb476726 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -967,6 +967,7 @@ struct io_op_def {
 	/* set if opcode supports polled "wait" */
 	unsigned		pollin : 1;
 	unsigned		pollout : 1;
+	unsigned		poll_exclusive : 1;
 	/* op supports buffer selection */
 	unsigned		buffer_select : 1;
 	/* do prep async if is going to be punted */
@@ -1061,6 +1062,7 @@ static const struct io_op_def io_op_defs[] = {
 		.needs_file		= 1,
 		.unbound_nonreg_file	= 1,
 		.pollin			= 1,
+		.poll_exclusive		= 1,
 	},
 	[IORING_OP_ASYNC_CANCEL] = {
 		.audit_skip		= 1,
@@ -6293,6 +6295,8 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
 	} else {
 		mask |= POLLOUT | POLLWRNORM;
 	}
+	if (def->poll_exclusive)
+		mask |= EPOLLEXCLUSIVE;
 
 	if (!(issue_flags & IO_URING_F_UNLOCKED) &&
 	    !list_empty(&ctx->apoll_cache)) {

--
Jens Axboe
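
For reference, a minimal userspace sketch of the scenario under discussion,
assuming liburing; the port number, queue depth, and accept count are
arbitrary. It queues several accept SQEs on a non-blocking listen socket:
with REQ_F_NOWAIT set they would all complete immediately with -EAGAIN,
while with the fix they arm internal poll, and with EPOLLEXCLUSIVE-style
wakeups a single incoming connection would complete only one of them.

#include <liburing.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
		.sin_port = htons(8080),	/* arbitrary port */
	};
	struct io_uring_cqe *cqe;
	struct io_uring ring;
	int lfd, i;

	/* non-blocking listen socket: the case the patch cares about */
	lfd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 16);

	io_uring_queue_init(8, &ring, 0);

	/* queue four accepts up front */
	for (i = 0; i < 4; i++)
		io_uring_prep_accept(io_uring_get_sqe(&ring), lfd,
				     NULL, NULL, 0);
	io_uring_submit(&ring);

	/*
	 * Wait for the first completion: res is the accepted fd on
	 * success, or -EAGAIN for the "extra" accepts woken by the
	 * same connection when wakeups are not exclusive.
	 */
	io_uring_wait_cqe(&ring, &cqe);
	printf("accept res=%d\n", cqe->res);
	if (cqe->res >= 0)
		close(cqe->res);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	close(lfd);
	return 0;
}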
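
And for comparison, a sketch of the EPOLLEXCLUSIVE behavior for separate
poll that the wire-up mirrors (error handling omitted; the function name
is illustrative):

#include <sys/epoll.h>

/*
 * With EPOLLEXCLUSIVE, when several epoll instances (or threads)
 * watch the same listen fd, an incoming connection wakes only one
 * of them instead of all; these are the semantics the wire-up above
 * gives io_uring's internal poll for accept.
 */
void watch_exclusive(int epfd, int lfd)
{
	struct epoll_event ev = {
		.events = EPOLLIN | EPOLLEXCLUSIVE,
		.data.fd = lfd,
	};

	/* EPOLLEXCLUSIVE is only valid with EPOLL_CTL_ADD */
	epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);
}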