On 2/29/20 10:34 AM, Jens Axboe wrote:
>>> +static int io_provide_buffers(struct io_kiocb *req, struct io_kiocb **nxt,
>>> +			       bool force_nonblock)
>>> +{
>>> +	struct io_provide_buf *p = &req->pbuf;
>>> +	struct io_ring_ctx *ctx = req->ctx;
>>> +	struct list_head *list;
>>> +	int ret = 0;
>>> +
>>> +	/*
>>> +	 * "Normal" inline submissions always hold the uring_lock, since we
>>> +	 * grab it from the system call. Same is true for the SQPOLL offload.
>>> +	 * The only exception is when we've detached the request and issue it
>>> +	 * from an async worker thread, grab the lock for that case.
>>> +	 */
>>> +	if (!force_nonblock)
>>> +		mutex_lock(&ctx->uring_lock);
>>> +
>>> +	lockdep_assert_held(&ctx->uring_lock);
>>> +
>>> +	list = idr_find(&ctx->io_buffer_idr, p->gid);
>>> +	if (!list) {
>>> +		list = kmalloc(sizeof(*list), GFP_KERNEL);
>>
>> Could be easier to hook struct io_buffer into idr directly, i.e. without
>> a separate allocated list-head entry.
>
> Good point, we can just make the first kbuf the list head and point the
> idr entry at the next one (or NULL) when a kbuf is removed. I'll make
> that change; it gets rid of the list alloc.

I took a look at this, and it does come with tradeoffs. The nice thing
about the separate list head is that it provides a constant lookup
pointer, whereas if we make the kbuf itself the idr entry and hang the
other buffers off it, then we at least need an idr_replace() or similar
once the head kbuf itself goes away. And we need to grab buffers from
the tail to retain the head kbuf (which is what the idr indexes) for as
long as possible. The latter isn't really a tradeoff, though, just list
management.

I do still think it'll end up nicer, so I'll go ahead and give it a
whirl.

-- 
Jens Axboe
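
[Editor's illustration] A minimal sketch of the tail-first removal
scheme described above. This is hypothetical, not the actual patch: the
struct io_buffer layout and the io_take_buffer() helper name are
assumptions, and only ctx->io_buffer_idr is taken from the thread.

	struct io_buffer {
		struct list_head list;	/* links the other buffers in this group */
		__u64 addr;
		__u32 len;
		__u16 bid;
	};

	/*
	 * Hypothetical helper: pop one buffer from group 'gid'. Buffers
	 * are taken from the tail so the head kbuf, which is the idr
	 * entry itself, survives until the group is empty; only then is
	 * the idr slot dropped. Taking from the tail avoids needing an
	 * idr_replace() on every removal.
	 */
	static struct io_buffer *io_take_buffer(struct io_ring_ctx *ctx, int gid)
	{
		struct io_buffer *head, *kbuf;

		head = idr_find(&ctx->io_buffer_idr, gid);
		if (!head)
			return NULL;

		if (!list_empty(&head->list)) {
			/* more buffers hang off the head: take the tail one */
			kbuf = list_last_entry(&head->list, struct io_buffer, list);
			list_del(&kbuf->list);
		} else {
			/* head is the last buffer: the idr slot goes with it */
			kbuf = head;
			idr_remove(&ctx->io_buffer_idr, gid);
		}
		return kbuf;
	}

The caller would hold ctx->uring_lock, per the locking comment in the
quoted patch hunk above.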