On 12/7/21 2:39 AM, Hao Xu wrote:
> In previous patches, we have already gathered some tw with
> io_req_task_complete() as the callback in prior_task_list; let's
> complete them in batch when we cannot grab the uring lock. In this
> way, we batch the req_complete_post path.
>
> Tested-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> Signed-off-by: Hao Xu <haoxu@xxxxxxxxxxxxxxxxx>
> ---
>
> Hi Pavel,
> May I add the above Tested-by tag here?

When you fold in Pavel's fixes, please also address the below.

>  fs/io_uring.c | 70 +++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 60 insertions(+), 10 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 21738ed7521e..f224f8df77a1 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -2225,6 +2225,49 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
>  	percpu_ref_put(&ctx->refs);
>  }
>
> +static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
> +{
> +	io_commit_cqring(ctx);
> +	spin_unlock(&ctx->completion_lock);
> +	io_cqring_ev_posted(ctx);
> +}
> +
> +static void handle_prior_tw_list(struct io_wq_work_node *node, struct io_ring_ctx **ctx,
> +				 bool *uring_locked, bool *compl_locked)
> +{

Please wrap at 80 columns. And let's name this one 'handle_prev_tw_list'
instead.
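E.g., something like this for the declaration (just a sketch of the
rename and the wrap, untested):

static void handle_prev_tw_list(struct io_wq_work_node *node,
				struct io_ring_ctx **ctx, bool *uring_locked,
				bool *compl_locked)

-- 
Jens Axboe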