Move io_put_req() in io_poll_task_handler() from under spinlock. That's
a good rule to minimise time within spinlock sections, and
performance-wise it should affect only rare cases/slow-path.

Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
---
 fs/io_uring.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3ccc7939d863..ca1cff579873 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4857,10 +4857,9 @@ static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
 
 	hash_del(&req->hash_node);
 	io_poll_complete(req, req->result, 0);
-	req->flags |= REQ_F_COMP_LOCKED;
-	*nxt = io_put_req_find_next(req);
 	spin_unlock_irq(&ctx->completion_lock);
 
+	*nxt = io_put_req_find_next(req);
 	io_cqring_ev_posted(ctx);
 }
-- 
2.24.0