Re: [PATCH 2/6] io_uring: add IORING_OP_PROVIDE_BUFFERS

On 2/29/20 4:36 AM, Pavel Begunkov wrote:
> On 2/29/2020 7:50 AM, Jens Axboe wrote:
>> On 2/28/20 5:43 PM, Pavel Begunkov wrote:
>>>> +static int io_provide_buffers(struct io_kiocb *req, struct io_kiocb **nxt,
>>>> +			      bool force_nonblock)
>>>> +{
>>>> +	struct io_provide_buf *p = &req->pbuf;
>>>> +	struct io_ring_ctx *ctx = req->ctx;
>>>> +	struct list_head *list;
>>>> +	int ret = 0;
>>>> +
>>>> +	/*
>>>> +	 * "Normal" inline submissions always hold the uring_lock, since we
>>>> +	 * grab it from the system call. Same is true for the SQPOLL offload.
>>>> +	 * The only exception is when we've detached the request and issue it
>>>> +	 * from an async worker thread, grab the lock for that case.
>>>> +	 */
>>>> +	if (!force_nonblock)
>>>> +		mutex_lock(&ctx->uring_lock);
>>>
>>> io_poll_task_handler() calls it with force_nonblock==true, but it
>>> doesn't hold the mutex AFAIK.
>>
>> True, that's the only exception. And that command doesn't transfer data
>> so would never need a buffer, but I agree that's perhaps not fully
>> clear. The async task handler grabs the mutex.
> 
> Hmm, I meant io_poll_task_func(), which does __io_queue_sqe() for the
> @nxt request, which in turn calls io_issue_sqe(force_nonblock=true).
> 
> Does io_poll_task_func() hold @uring_mutex? Otherwise, if @nxt happened
> to be io_provide_buffers(), we get there without holding the mutex and
> with force_nonblock=true.

Ah yes, good point! Not sure why I missed that... I've added the locking
around __io_queue_sqe() if 'nxt' is set in the handler.

-- 
Jens Axboe



