On 2/28/2020 11:30 PM, Jens Axboe wrote:
> +static int io_rw_common_cflags(struct io_kiocb *req)

The name sounds like a const/idempotent function, not one changing
internal state (i.e. deallocating the kbuf). Could it be named closer to
deallocate, remove, disarm, etc.? (a rough sketch of what I mean is at
the end of this mail)

> +{
> +	struct io_buffer *kbuf = (struct io_buffer *) req->rw.addr;
> +	int cflags;
> +
> +	cflags = kbuf->bid << IORING_CQE_BUFFER_SHIFT;
> +	cflags |= IORING_CQE_F_BUFFER;
> +	req->rw.addr = 0;
> +	kfree(kbuf);
> +	return cflags;
> +}

> +static struct io_buffer *io_buffer_select(struct io_kiocb *req, int gid,
> +					  struct io_buffer *kbuf,
> +					  bool needs_lock)
> +{
> +	struct list_head *list;
> +
> +	if (req->flags & REQ_F_BUFFER_SELECTED)
> +		return kbuf;
> +
> +	/*
> +	 * "Normal" inline submissions always hold the uring_lock, since we
> +	 * grab it from the system call. Same is true for the SQPOLL offload.
> +	 * The only exception is when we've detached the request and issue it
> +	 * from an async worker thread, grab the lock for that case.
> +	 */
> +	if (needs_lock)
> +		mutex_lock(&req->ctx->uring_lock);

The same concern as for [2/6].

> +
> +	lockdep_assert_held(&req->ctx->uring_lock);
> +
> +	list = idr_find(&req->ctx->io_buffer_idr, gid);
> +	if (list && !list_empty(list)) {
> +		kbuf = list_first_entry(list, struct io_buffer, list);
> +		list_del(&kbuf->list);

Shouldn't the list be freed once it becomes empty? As mentioned, this may
go away naturally if the idr stored io_buffer directly (a sketch of that
idea is also at the end of this mail).

> +	} else {
> +		kbuf = ERR_PTR(-ENOBUFS);
> +	}
> +
> +	if (needs_lock)
> +		mutex_unlock(&req->ctx->uring_lock);
> +
> +	return kbuf;
> +}
> +

-- 
Pavel Begunkov
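P.S. To make the naming point concrete, something along these lines --
io_put_kbuf() is just a name I picked for illustration, the body is the
one from the patch:

/* hypothetical rename of io_rw_common_cflags(), body unchanged */
static int io_put_kbuf(struct io_kiocb *req)
{
	struct io_buffer *kbuf = (struct io_buffer *) req->rw.addr;
	int cflags;

	cflags = kbuf->bid << IORING_CQE_BUFFER_SHIFT;
	cflags |= IORING_CQE_F_BUFFER;
	req->rw.addr = 0;
	kfree(kbuf);
	return cflags;
}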
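And a rough sketch of storing io_buffer directly in the idr, so an emptied
group just disappears instead of leaving a list head behind. The helper
name io_buffer_take() is made up, and I'm assuming the head buffer of a
group is the idr entry with the remaining buffers chained through
io_buffer::list; locking is elided (same needs_lock handling as in the
patch), so take it only as an illustration of the idea:

static struct io_buffer *io_buffer_take(struct io_ring_ctx *ctx, int gid)
{
	struct io_buffer *head, *next;

	head = idr_find(&ctx->io_buffer_idr, gid);
	if (!head)
		return ERR_PTR(-ENOBUFS);

	if (list_empty(&head->list)) {
		/* last buffer in the group, drop the group entry entirely */
		idr_remove(&ctx->io_buffer_idr, gid);
	} else {
		/* promote the next buffer to be the group's idr entry */
		next = list_first_entry(&head->list, struct io_buffer, list);
		list_del(&head->list);
		idr_replace(&ctx->io_buffer_idr, next, gid);
	}
	return head;
}

Registration would then INIT_LIST_HEAD() the first buffer of a group and
list_add() the rest onto it, so there is no separately allocated
list_head left to free.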