io_issue_sqe() calls io_iopoll_req_issued() which manipulates poll_list,
so acquire ctx->uring_lock beforehand, similar to other instances of
calling io_issue_sqe().

Signed-off-by: Bijan Mottahedeh <bijan.mottahedeh@xxxxxxxxxx>
---
 fs/io_uring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d015ce8..7b399e2 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4359,7 +4359,9 @@ static void io_wq_submit_work(struct io_wq_work **workptr)
 		req->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
 		req->in_async = true;
 		do {
+			mutex_lock(&req->ctx->uring_lock);
 			ret = io_issue_sqe(req, NULL, &nxt, false);
+			mutex_unlock(&req->ctx->uring_lock);
 			/*
 			 * We can get EAGAIN for polled IO even though we're
 			 * forcing a sync submission from here, since we can't
-- 
1.8.3.1
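
For context, the pattern being applied is the usual one of holding the mutex
that guards a shared list across a call that ends up modifying that list.
Below is a minimal userspace C sketch of the same idea; all names (ctx, issue,
req_issued, worker_submit) are illustrative stand-ins and not real io_uring
symbols, and the mutex plays the role that ctx->uring_lock plays for poll_list
in the patch above.

/*
 * Sketch: the worker path must take the context lock around issue(),
 * because issue() internally calls req_issued(), which touches a
 * structure protected by that lock.
 */
#include <pthread.h>
#include <stdio.h>

struct ctx {
	pthread_mutex_t lock;	/* stand-in for ctx->uring_lock */
	int poll_list_len;	/* stand-in for the protected poll_list */
};

/* Analogue of io_iopoll_req_issued(): touches the protected state. */
static void req_issued(struct ctx *c)
{
	c->poll_list_len++;	/* must only run with c->lock held */
}

/* Analogue of io_issue_sqe(): may call req_issued() internally. */
static int issue(struct ctx *c)
{
	req_issued(c);
	return 0;
}

/* Analogue of io_wq_submit_work(): take the lock around the issue call. */
static void worker_submit(struct ctx *c)
{
	pthread_mutex_lock(&c->lock);
	issue(c);
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	struct ctx c = { .lock = PTHREAD_MUTEX_INITIALIZER, .poll_list_len = 0 };

	worker_submit(&c);
	printf("poll_list_len = %d\n", c.poll_list_len);
	return 0;
}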