This looks much better now! One question and a minor comment:

> @@ -2619,9 +2622,13 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
>
>         if (get_timespec64(&ts, uts))
>                 return -EFAULT;
> +
>         iowq.timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());

Probably want to kill that extra added line, though it's obviously not
worth respinning for.

> diff --git a/io_uring/napi.c b/io_uring/napi.c
> new file mode 100644
> index 000000000000..a085122cae8b
> --- /dev/null
> +++ b/io_uring/napi.c
> +void __io_napi_add(struct io_ring_ctx *ctx, struct file *file)
> +{
> +        unsigned int napi_id;
> +        struct socket *sock;
> +        struct sock *sk;
> +        struct io_napi_ht_entry *he;
> +
> +        sock = sock_from_file(file);
> +        if (!sock)
> +                return;
> +
> +        sk = sock->sk;
> +        if (!sk)
> +                return;
> +
> +        napi_id = READ_ONCE(sk->sk_napi_id);
> +
> +        /* Non-NAPI IDs can be rejected. */
> +        if (napi_id < MIN_NAPI_ID)
> +                return;
> +
> +        spin_lock(&ctx->napi_lock);
> +        hash_for_each_possible(ctx->napi_ht, he, node, napi_id) {
> +                if (he->napi_id == napi_id) {
> +                        he->timeout = jiffies + NAPI_TIMEOUT;
> +                        goto out;
> +                }
> +        }
> +
> +        he = kmalloc(sizeof(*he), GFP_NOWAIT);
> +        if (!he)
> +                goto out;
> +
> +        he->napi_id = napi_id;
> +        he->timeout = jiffies + NAPI_TIMEOUT;
> +        hash_add(ctx->napi_ht, &he->node, napi_id);
> +
> +        list_add_tail(&he->list, &ctx->napi_list);
> +
> +out:
> +        spin_unlock(&ctx->napi_lock);
> +}

Didn't look into the details here just yet, but one thing occurred to me:
would it be possible to rcu_read_lock()-protect the hash for lookup? I'd
imagine the ratio of successful lookups to "nope, nothing found, need to
alloc and insert" is quite high, and we could avoid taking napi_lock for
that common case when just iterating the hash. It would obviously need RCU
freeing of 'he' as well, and so forth, plus some way to detect whether 'he'
is going away or not. But it seems like it'd be doable without too much
trouble?

-- 
Jens Axboe
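
To make the RCU suggestion above concrete, here is a minimal sketch of what
the lockless fast path could look like. It is illustrative only, not the
actual patch: it assumes 'struct io_napi_ht_entry' grows a 'struct rcu_head'
member so entries can be freed with kfree_rcu(), and 'io_napi_del_entry' is
a hypothetical removal helper, not something from the posted series.

/* Assumed layout; only the 'rcu' member is new relative to the patch. */
struct io_napi_ht_entry {
        unsigned int            napi_id;
        struct hlist_node       node;
        struct list_head        list;
        unsigned long           timeout;
        struct rcu_head         rcu;    /* assumption: enables kfree_rcu() */
};

void __io_napi_add(struct io_ring_ctx *ctx, struct file *file)
{
        struct io_napi_ht_entry *he;
        unsigned int napi_id;
        struct socket *sock;
        struct sock *sk;

        sock = sock_from_file(file);
        if (!sock)
                return;

        sk = sock->sk;
        if (!sk)
                return;

        napi_id = READ_ONCE(sk->sk_napi_id);
        if (napi_id < MIN_NAPI_ID)
                return;

        /*
         * Fast path: lockless lookup. If a concurrent removal wins the
         * race, we merely refresh the timeout of an entry that is about
         * to be freed, which is harmless: the memory stays valid until
         * the RCU grace period ends.
         */
        rcu_read_lock();
        hash_for_each_possible_rcu(ctx->napi_ht, he, node, napi_id) {
                if (he->napi_id == napi_id) {
                        WRITE_ONCE(he->timeout, jiffies + NAPI_TIMEOUT);
                        rcu_read_unlock();
                        return;
                }
        }
        rcu_read_unlock();

        /* Slow path: take the lock, recheck, then insert. */
        spin_lock(&ctx->napi_lock);
        hash_for_each_possible(ctx->napi_ht, he, node, napi_id) {
                if (he->napi_id == napi_id) {
                        he->timeout = jiffies + NAPI_TIMEOUT;
                        goto out;
                }
        }

        he = kmalloc(sizeof(*he), GFP_NOWAIT);
        if (!he)
                goto out;

        he->napi_id = napi_id;
        he->timeout = jiffies + NAPI_TIMEOUT;
        hash_add_rcu(ctx->napi_ht, &he->node, napi_id);
        list_add_tail(&he->list, &ctx->napi_list);
out:
        spin_unlock(&ctx->napi_lock);
}

/*
 * Hypothetical removal helper: unlink under the lock so writers stay
 * serialized, then defer the free until all RCU readers are done.
 */
static void io_napi_del_entry(struct io_ring_ctx *ctx,
                              struct io_napi_ht_entry *he)
{
        spin_lock(&ctx->napi_lock);
        hash_del_rcu(&he->node);
        list_del(&he->list);
        spin_unlock(&ctx->napi_lock);
        kfree_rcu(he, rcu);
}

The design choice this illustrates: insertions and removals stay serialized
by napi_lock, so only the read side changes, and the "is 'he' going away?"
question reduces to the benign lost-timeout-update race noted in the comment.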