Re: [PATCH 3/3] io_uring: briefly drop locks while reaping events

On 06/07/2020 17:31, Jens Axboe wrote:
> On 7/6/20 8:14 AM, Pavel Begunkov wrote:
>> It's not nice to hold @uring_lock for too long in io_iopoll_reap_events().
>> For instance, the lock is needed to publish requests to @poll_list, and
>> holding it there locks out tasks doing so for no good reason. Drop it
>> occasionally.
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
>> ---
>>  fs/io_uring.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 020944a193d0..568e25bcadd6 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -2059,11 +2059,14 @@ static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
>>  
>>  		io_iopoll_getevents(ctx, &nr_events, 1);
>>  
>> +		/* task_work takes the lock to publish a req to @poll_list */
>> +		mutex_unlock(&ctx->uring_lock);
>>  		/*
>>  		 * Ensure we allow local-to-the-cpu processing to take place,
>>  		 * in this case we need to ensure that we reap all events.
>>  		 */
>>  		cond_resched();
>> +		mutex_lock(&ctx->uring_lock);
>>  	}
>>  	mutex_unlock(&ctx->uring_lock);
>>  }
> 
> This would be much better written as:
> 
> if (need_resched()) {
> 	mutex_unlock(&ctx->uring_lock);
> 	cond_resched();
> 	mutex_lock(&ctx->uring_lock);
> }
> 
> to avoid shuffling the lock when not needed to. Every cycle counts for
> polled IO.

This only happens when the io_uring instance is being killed, and I can't
imagine any sane app trying to catch CQEs after doing that. I'll resend.

-- 
Pavel Begunkov


