On 11/21/22 12:14 PM, Stefan Roesch wrote:
> +/*
> + * io_napi_add() - Add napi id to the busy poll list
> + * @file: file pointer for socket
> + * @ctx: io-uring context
> + *
> + * Add the napi id of the socket to the napi busy poll list.
> + */
> +void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
> +{
> +	unsigned int napi_id;
> +	struct socket *sock;
> +	struct sock *sk;
> +	struct io_napi_entry *ne;
> +
> +	if (!io_napi_busy_loop_on(ctx))
> +		return;
> +
> +	sock = sock_from_file(file);
> +	if (!sock)
> +		return;
> +
> +	sk = sock->sk;
> +	if (!sk)
> +		return;
> +
> +	napi_id = READ_ONCE(sk->sk_napi_id);
> +
> +	/* Non-NAPI IDs can be rejected */
> +	if (napi_id < MIN_NAPI_ID)
> +		return;
> +
> +	spin_lock(&ctx->napi_lock);
> +	list_for_each_entry(ne, &ctx->napi_list, list) {
> +		if (ne->napi_id == napi_id) {
> +			ne->timeout = jiffies + NAPI_TIMEOUT;
> +			goto out;
> +		}
> +	}
> +
> +	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
> +	if (!ne)
> +		goto out;
> +
> +	ne->napi_id = napi_id;
> +	ne->timeout = jiffies + NAPI_TIMEOUT;
> +	list_add_tail(&ne->list, &ctx->napi_list);
> +
> +out:
> +	spin_unlock(&ctx->napi_lock);
> +}

I think this all looks good now, just one minor comment on the above.
Is the expectation here that we'll basically always add to the napi
list? If so, then I think allocating 'ne' outside the spinlock would
be a lot saner, and then just kfree() it for the unlikely case where
we find a duplicate.

--
Jens Axboe
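
[For illustration, a sketch of the restructuring suggested above. It
reuses the identifiers and locking from the quoted patch; the extra
iterator variable 'e' is introduced here purely for the duplicate
lookup and is not part of the posted patch.]

void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
{
	unsigned int napi_id;
	struct socket *sock;
	struct sock *sk;
	struct io_napi_entry *e, *ne;

	if (!io_napi_busy_loop_on(ctx))
		return;

	sock = sock_from_file(file);
	if (!sock)
		return;

	sk = sock->sk;
	if (!sk)
		return;

	napi_id = READ_ONCE(sk->sk_napi_id);

	/* Non-NAPI IDs can be rejected */
	if (napi_id < MIN_NAPI_ID)
		return;

	/* Allocate and fill the entry before taking the lock */
	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
	if (!ne)
		return;
	ne->napi_id = napi_id;
	ne->timeout = jiffies + NAPI_TIMEOUT;

	spin_lock(&ctx->napi_lock);
	list_for_each_entry(e, &ctx->napi_list, list) {
		if (e->napi_id == napi_id) {
			/* Unlikely duplicate: refresh timeout, drop ours */
			e->timeout = jiffies + NAPI_TIMEOUT;
			spin_unlock(&ctx->napi_lock);
			kfree(ne);
			return;
		}
	}
	list_add_tail(&ne->list, &ctx->napi_list);
	spin_unlock(&ctx->napi_lock);
}

The trade-off is one wasted kmalloc()/kfree() pair on the duplicate
path, in exchange for keeping the allocation out of the critical
section on the (expected) common path where a new entry is added.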