On 9/25/21 8:58 PM, Noah Goldstein wrote:
> On Fri, Sep 24, 2021 at 5:45 PM Pavel Begunkov <asml.silence@xxxxxxxxx>
[...]
>> +	if (unlikely(!nr_events))
>> +		return 0;
>> +
>> +	io_commit_cqring(ctx);
>> +	io_cqring_ev_posted_iopoll(ctx);
>> +	list.first = start ? start->next : ctx->iopoll_list.first;
>> +	list.last = prev;
>> 	wq_list_cut(&ctx->iopoll_list, prev, start);
>> -	if (nr_events)
>> -		io_iopoll_complete(ctx, &done);
>> +	io_free_batch_list(ctx, &list);
>>
>
> If it's logically feasible it may be slightly faster on speculative machines
> to pass `nr_events` to `io_free_batch_list` so instead of having the loop
> condition on `node` you can use the counter and hopefully recover from
> the branch miss at the end of the loop before current execution catches up.
>
> 	return nr_events;

Maybe. We can experiment afterwards and see if the numbers get better

-- 
Pavel Begunkov