Re: [PATCH] io_uring/napi: remove duplicate io_napi_entry timeout assignation

On 8/12/24 3:45 PM, Olivier Langlois wrote:
> On Mon, 2024-08-12 at 14:40 -0600, Jens Axboe wrote:
>>
>> @@ -174,9 +174,8 @@ static void io_napi_blocking_busy_loop(struct io_ring_ctx *ctx,
>>  	do {
>>  		is_stale = __io_napi_do_busy_loop(ctx, loop_end_arg);
>>  	} while (!io_napi_busy_loop_should_end(iowq, start_time) && !loop_end_arg);
>> -	rcu_read_unlock();
>> -
>>  	io_napi_remove_stale(ctx, is_stale);
>> +	rcu_read_unlock();
>>  }
>>
>> @@ -309,9 +309,8 @@ int io_napi_sqpoll_busy_poll(struct io_ring_ctx *ctx)
>>  
>>  	rcu_read_lock();
>>  	is_stale = __io_napi_do_busy_loop(ctx, NULL);
>> -	rcu_read_unlock();
>> -
>>  	io_napi_remove_stale(ctx, is_stale);
>> +	rcu_read_unlock();
>>  	return 1;
>>  }
>>  
>>
> Jens,
> 
> I have serious doubts that moving the rcu_read_unlock() call is correct.
> The read-only list access is performed by the busy loop block.
> 
> io_napi_remove_stale() then modifies the list after having acquired
> the spinlock. IMHO, you should not hold the RCU read lock while you are
> updating the data. I even wonder if this could be a possible
> livelock cause...

You can certainly nest a spinlock inside an RCU read-side section,
that's not an issue. The only thing that matters here is that the list
modification is done under the lock that protects it; the RCU is just
for reader-side protection (and to make the reader side cheaper). The
spinlock itself serves as reader-side RCU protection as well, so the
change isn't strictly needed to begin with.

-- 
Jens Axboe
