Re: [PATCH 2/6] io_uring: add IORING_OP_PROVIDE_BUFFERS

On 2/29/20 5:08 AM, Pavel Begunkov wrote:
> On 2/28/2020 11:30 PM, Jens Axboe wrote:
>> IORING_OP_PROVIDE_BUFFERS uses the buffer registration infrastructure to
>> support passing in an addr/len that is associated with a buffer ID and
>> buffer group ID. The group ID is used to index and look up the buffers,
>> while the buffer ID can be used to notify the application which buffer
>> in the group was used. The addr passed in is the starting buffer address,
>> and the length is the length of each buffer. The number of buffers to add
>> can be specified, in which case addr is incremented by length for each
>> addition, and the buffer ID is incremented for each buffer added.
>>
>> No validation is done of the buffer ID. If the application provides
>> buffers within the same group with identical buffer IDs, then it'll have
>> a hard time telling which buffer was used. The only restriction is
>> that the buffer ID is limited to 16 bits, so USHRT_MAX is the maximum ID
>> that can be used.
>>
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
> 
>>  
>> +static int io_provide_buffers_prep(struct io_kiocb *req,
>> +				   const struct io_uring_sqe *sqe)
>> +{
> 
> *provide* may be confusing, at least it is to me. It's not immediately
> obvious who gives and who consumes. I'm not sure yet what name would be better.

It's the application providing the buffers upfront, at least that's how
the naming makes sense to me.
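
To make the direction concrete, here's a rough sketch of the application
side: hand the kernel 8 buffers of 4KB each under group ID 1, with buffer
IDs starting at 0. The io_uring_prep_provide_buffers() helper is just how
I'd imagine a liburing wrapper looking, 'ring' is assumed to be an already
initialized struct io_uring, and the exact sqe layout may still change:

	void *mem;
	struct io_uring_sqe *sqe;

	/* one contiguous allocation, carved into 8 buffers of 4096 bytes */
	if (posix_memalign(&mem, 4096, 8 * 4096))
		return;

	sqe = io_uring_get_sqe(&ring);
	/* addr, per-buffer len, nr of buffers, group ID, starting buffer ID */
	io_uring_prep_provide_buffers(sqe, mem, 4096, 8, 1, 0);
	io_uring_submit(&ring);

The kernel then walks forward from mem by 4096 for each buffer and bumps
the buffer ID for each one, so completions can later point back at which
of the 8 buffers got picked.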

>> +static int io_provide_buffers(struct io_kiocb *req, struct io_kiocb **nxt,
>> +			      bool force_nonblock)
>> +{
>> +	struct io_provide_buf *p = &req->pbuf;
>> +	struct io_ring_ctx *ctx = req->ctx;
>> +	struct list_head *list;
>> +	int ret = 0;
>> +
>> +	/*
>> +	 * "Normal" inline submissions always hold the uring_lock, since we
>> +	 * grab it from the system call. Same is true for the SQPOLL offload.
>> +	 * The only exception is when we've detached the request and issue it
>> +	 * from an async worker thread, grab the lock for that case.
>> +	 */
>> +	if (!force_nonblock)
>> +		mutex_lock(&ctx->uring_lock);
>> +
>> +	lockdep_assert_held(&ctx->uring_lock);
>> +
>> +	list = idr_find(&ctx->io_buffer_idr, p->gid);
>> +	if (!list) {
>> +		list = kmalloc(sizeof(*list), GFP_KERNEL);
> 
> It could be easier to hook struct io_buffer into the idr directly, i.e.
> without a separately allocated list-head entry.

Good point, we can just make the first kbuf the list head and point to the
next one (or NULL) when a kbuf is removed. I'll make that change; it gets
rid of the list allocation.
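
Something along these lines, roughly (untested sketch, and the field names
are just what I'd expect the struct to end up looking like):

	struct io_buffer {
		struct list_head list;
		__u64 addr;
		__u32 len;
		__u16 bid;
	};

	/* the first kbuf in a group doubles as the list head in the idr */
	head = idr_find(&ctx->io_buffer_idr, p->gid);
	if (!head) {
		INIT_LIST_HEAD(&buf->list);
		ret = idr_alloc(&ctx->io_buffer_idr, buf, p->gid, p->gid + 1,
				GFP_KERNEL);
	} else {
		list_add_tail(&buf->list, &head->list);
	}

When a kbuf is consumed, the head just gets idr_replace()'d with the next
entry on the list, or idr_remove()'d if it was the last one.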

-- 
Jens Axboe
