On 3/10/20 9:04 AM, Jens Axboe wrote:
> +static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock)
> +{
> +	struct io_provide_buf *p = &req->pbuf;
> +	struct io_ring_ctx *ctx = req->ctx;
> +	struct io_buffer *head, *list;
> +	int ret = 0;
> +
> +	/*
> +	 * "Normal" inline submissions always hold the uring_lock, since we
> +	 * grab it from the system call. Same is true for the SQPOLL offload.
> +	 * The only exception is when we've detached the request and issue it
> +	 * from an async worker thread, grab the lock for that case.
> +	 */
> +	if (!force_nonblock)
> +		mutex_lock(&ctx->uring_lock);

I mistakenly introduced io_ring_submit_lock() in patch 3 instead of this
one; I've corrected that in the git branch.

-- 
Jens Axboe
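For readers following along: the io_ring_submit_lock() helper mentioned above wraps exactly the conditional-locking pattern in the quoted hunk (take uring_lock only when the caller doesn't already hold it). Below is a minimal user-space sketch of that pattern, with a pthread mutex standing in for uring_lock; the struct and function names here are illustrative, not the kernel's.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct io_ring_ctx: only the lock matters here. */
struct ring_ctx {
	pthread_mutex_t uring_lock;
};

/*
 * Sketch of an io_ring_submit_lock()-style helper: inline submissions and
 * SQPOLL already hold the lock, so only the async-worker path (needs_lock
 * true, i.e. !force_nonblock in the quoted patch) actually takes it.
 */
static void ring_submit_lock(struct ring_ctx *ctx, bool needs_lock)
{
	if (needs_lock)
		pthread_mutex_lock(&ctx->uring_lock);
}

static void ring_submit_unlock(struct ring_ctx *ctx, bool needs_lock)
{
	if (needs_lock)
		pthread_mutex_unlock(&ctx->uring_lock);
}
```

The point of centralizing this in a helper is that every opcode handler with the same "already locked unless async" rule stops open-coding the `if (!force_nonblock) mutex_lock(...)` dance.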