Re: [PATCH v3 1/3] io_uring: refactor event check out of __io_async_wake()

On 25-10-2021 11:08, Xiaoguang Wang wrote:
> Which is a preparation for following patch, and here try to inline
> __io_async_wake(), which is simple and can save a function call.
> 
> Signed-off-by: Xiaoguang Wang <xiaoguang.wang@xxxxxxxxxxxxxxxxx>
> ---
>  fs/io_uring.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 736d456e7913..18af9bb9a4bc 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -5228,13 +5228,9 @@ struct io_poll_table {
>  	int error;
>  };
>  
> -static int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
> +static inline int __io_async_wake(struct io_kiocb *req, struct io_poll_iocb *poll,
>  			   __poll_t mask, io_req_tw_func_t func)
>  {
> -	/* for instances that support it check for an event match first: */
> -	if (mask && !(mask & poll->events))
> -		return 0;
> -

Is it possible to keep this check as it is, and only make __io_async_wake() inline?
As far as I can see, the callers now duplicate the same check in different places.
There is also a risk that the check gets missed when new callers are introduced in the future.

>  	trace_io_uring_task_add(req->ctx, req->opcode, req->user_data, mask);
>  
>  	list_del_init(&poll->wait.entry);
> @@ -5508,11 +5504,16 @@ static int io_async_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  {
>  	struct io_kiocb *req = wait->private;
>  	struct io_poll_iocb *poll = &req->apoll->poll;
> +	__poll_t mask = key_to_poll(key);
>  
>  	trace_io_uring_poll_wake(req->ctx, req->opcode, req->user_data,
>  					key_to_poll(key));
>  
> -	return __io_async_wake(req, poll, key_to_poll(key), io_async_task_func);
> +	/* for instances that support it check for an event match first: */
> +	if (mask && !(mask & poll->events))
> +		return 0;
> +
> +	return __io_async_wake(req, poll, mask, io_async_task_func);
>  }
>  
>  static void io_poll_req_insert(struct io_kiocb *req)
> @@ -5772,8 +5773,13 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>  {
>  	struct io_kiocb *req = wait->private;
>  	struct io_poll_iocb *poll = &req->poll;
> +	__poll_t mask = key_to_poll(key);
> +
> +	/* for instances that support it check for an event match first: */
> +	if (mask && !(mask & poll->events))
> +		return 0;
>  
> -	return __io_async_wake(req, poll, key_to_poll(key), io_poll_task_func);
> +	return __io_async_wake(req, poll, mask, io_poll_task_func);
>  }
>  
>  static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
> 

Regards,

~Praveen.


