On 2/9/21 5:03 PM, Pavel Begunkov wrote:
> While there are requests in the allocation cache -- use them; only if
> those run out, go for the stashed memory in comp.free_list. As list
> manipulations are generally heavy and are not good for caches, flush
> them all, or as much as we can, in one go.
>
> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> ---
>  fs/io_uring.c | 29 +++++++++++++++++++----------
>  1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 64d3f3e2e93d..17194e0d62ff 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1953,25 +1953,34 @@ static inline void io_req_complete(struct io_kiocb *req, long res)
>  	__io_req_complete(req, 0, res, 0);
>  }
>
> +static void io_flush_cached_reqs(struct io_submit_state *state)
> +{
> +	do {
> +		struct io_kiocb *req = list_first_entry(&state->comp.free_list,
> +						struct io_kiocb, compl.list);
> +
> +		list_del(&req->compl.list);
> +		state->reqs[state->free_reqs++] = req;
> +		if (state->free_reqs == ARRAY_SIZE(state->reqs))
> +			break;
> +	} while (!list_empty(&state->comp.free_list));
> +}
> +
>  static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
>  {
>  	struct io_submit_state *state = &ctx->submit_state;
>
>  	BUILD_BUG_ON(IO_REQ_ALLOC_BATCH > ARRAY_SIZE(state->reqs));
>
> -	if (!list_empty(&state->comp.free_list)) {
> -		struct io_kiocb *req;
> -
> -		req = list_first_entry(&state->comp.free_list, struct io_kiocb,
> -						compl.list);
> -		list_del(&req->compl.list);
> -		return req;
> -	}
> -
>  	if (!state->free_reqs) {
>  		gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>  		int ret;
>
> +		if (!list_empty(&state->comp.free_list)) {
> +			io_flush_cached_reqs(state);
> +			goto out;
> +		}

I think that'd be cleaner as:

	if (io_flush_cached_reqs(state))
		goto got_req;

and have io_flush_cached_reqs() return true/false depending on what it did.

-- 
Jens Axboe
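To make the suggested shape concrete, here is a minimal userspace sketch of that pattern: a singly linked free list flushed in one go into a fixed-size request array, with the flush routine returning true/false depending on whether it moved anything. All names here (struct req, struct state, flush_cached_reqs, REQ_CACHE_SIZE) are illustrative stand-ins, not the actual io_uring types, which use the kernel's list_head machinery.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define REQ_CACHE_SIZE 8

struct req {
	struct req *next;
};

struct state {
	struct req *free_list;			/* stand-in for comp.free_list */
	struct req *reqs[REQ_CACHE_SIZE];	/* stand-in for state->reqs[] */
	unsigned int free_reqs;
};

/*
 * Drain as much of the free list as fits into the array cache.
 * Returns true if at least one request was moved, so the caller
 * can simply do: if (flush_cached_reqs(state)) goto got_req;
 */
static bool flush_cached_reqs(struct state *state)
{
	bool moved = false;

	while (state->free_list && state->free_reqs < REQ_CACHE_SIZE) {
		struct req *req = state->free_list;

		state->free_list = req->next;
		state->reqs[state->free_reqs++] = req;
		moved = true;
	}
	return moved;
}
```

Returning bool lets the caller branch without re-checking list_empty(), which keeps the slow path in io_alloc_req() to a single conditional.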