Re: [PATCH 1/3] io_uring: add remote task_work execution helper

On 4/1/24 11:30 AM, David Wei wrote:
> On 2024-03-29 13:09, Jens Axboe wrote:
>> All our task_work handling is targeted at the state in the io_kiocb
>> itself, which is what it is being used for. However, MSG_RING rolls its
>> own task_work handling, ignoring how that is usually done.
>>
>> In preparation for switching MSG_RING to be able to use the normal
>> task_work handling, add io_req_task_work_add_remote() which allows the
>> caller to pass in the target io_ring_ctx.
>>
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
>>  io_uring/io_uring.c | 30 ++++++++++++++++++++++--------
>>  io_uring/io_uring.h |  2 ++
>>  2 files changed, 24 insertions(+), 8 deletions(-)
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index fddaefb9cbff..a311a244914b 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -1232,9 +1232,10 @@ void tctx_task_work(struct callback_head *cb)
>>  	WARN_ON_ONCE(ret);
>>  }
>>  
>> -static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
>> +static inline void io_req_local_work_add(struct io_kiocb *req,
>> +					 struct io_ring_ctx *ctx,
>> +					 unsigned flags)
>>  {
>> -	struct io_ring_ctx *ctx = req->ctx;
>>  	unsigned nr_wait, nr_tw, nr_tw_prev;
>>  	struct llist_node *head;
>>  
>> @@ -1300,9 +1301,10 @@ static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
>>  	wake_up_state(ctx->submitter_task, TASK_INTERRUPTIBLE);
>>  }
>>  
>> -static void io_req_normal_work_add(struct io_kiocb *req)
>> +static void io_req_normal_work_add(struct io_kiocb *req,
>> +				   struct task_struct *task)
>>  {
>> -	struct io_uring_task *tctx = req->task->io_uring;
>> +	struct io_uring_task *tctx = task->io_uring;
>>  	struct io_ring_ctx *ctx = req->ctx;
>>  
>>  	/* task_work already pending, we're done */
>> @@ -1321,7 +1323,7 @@ static void io_req_normal_work_add(struct io_kiocb *req)
>>  		return;
>>  	}
>>  
>> -	if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
>> +	if (likely(!task_work_add(task, &tctx->task_work, ctx->notify_method)))
>>  		return;
>>  
>>  	io_fallback_tw(tctx, false);
>> @@ -1331,10 +1333,22 @@ void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
>>  {
>>  	if (req->ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
>>  		rcu_read_lock();
>> -		io_req_local_work_add(req, flags);
>> +		io_req_local_work_add(req, req->ctx, flags);
>> +		rcu_read_unlock();
>> +	} else {
>> +		io_req_normal_work_add(req, req->task);
> 
> Why does this not require a READ_ONCE() like
> io_req_task_work_add_remote()?

The req->task side is serialized, as it's set up at install time. The
ctx->submitter_task side in _theory_ isn't, hence READ_ONCE/WRITE_ONCE
must be used there. In practice, I don't think the latter solves
anything beyond perhaps silencing KCSAN.
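
For reference, the remote variant in question would look roughly like
the below. This is only a sketch pieced together from the hunks quoted
above, not necessarily the exact code in the patch: the DEFER_TASKRUN
path uses the caller-provided ctx, while the normal path takes the task
from ctx->submitter_task (hence the READ_ONCE()) rather than from
req->task.

	/*
	 * Sketch based on the quoted hunks -- the difference from
	 * __io_req_task_work_add() is that the target ctx is passed in
	 * by the caller, and the task comes from ctx->submitter_task
	 * instead of req->task, which is why READ_ONCE() is used here.
	 */
	void io_req_task_work_add_remote(struct io_kiocb *req,
					 struct io_ring_ctx *ctx,
					 unsigned flags)
	{
		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
			rcu_read_lock();
			io_req_local_work_add(req, ctx, flags);
			rcu_read_unlock();
		} else {
			/* submitter_task isn't the issuer's own task here */
			io_req_normal_work_add(req, READ_ONCE(ctx->submitter_task));
		}
	}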

-- 
Jens Axboe




