Re: [PATCH 6.4 800/800] io_uring: Use io_schedule* in cqring wait

Hello.

On Sunday, 16 July 2023 at 21:50:53 CEST, Greg Kroah-Hartman wrote:
> From: Andres Freund <andres@xxxxxxxxxxx>
> 
> commit 8a796565cec3601071cbbd27d6304e202019d014 upstream.
> 
> I observed poor performance of io_uring compared to synchronous IO. That
> turns out to be caused by deeper CPU idle states entered with io_uring,
> due to io_uring using plain schedule(), whereas synchronous IO uses
> io_schedule().
> 
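For context: io_schedule() is a thin wrapper that marks the task as blocked
on IO around the actual sleep. From my reading of kernel/sched/core.c around
v6.4, it is roughly:

	void __sched io_schedule(void)
	{
		int token;

		/* set current->in_iowait and flush plugged block IO */
		token = io_schedule_prepare();
		schedule();
		/* restore the previous in_iowait state */
		io_schedule_finish(token);
	}

The in_iowait marking feeds the iowait accounting that cpufreq (and cpuidle
heuristics) consult, which is why sleeping via plain schedule() lets the CPU
drop into deeper idle states.
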
> The losses due to this are substantial. On my cascade lake workstation,
> t/io_uring from the fio repository e.g. yields regressions between 20%
> and 40% with the following command:
> ./t/io_uring -r 5 -X0 -d 1 -s 1 -c 1 -p 0 -S$use_sync -R 0 /mnt/t2/fio/write.0.0
> 
> This is repeatable with different filesystems, using raw block devices
> and using different block devices.
> 
> Use io_schedule_prepare() / io_schedule_finish() in
> io_cqring_wait_schedule() to address the difference.
> 
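io_schedule_prepare()/io_schedule_finish() are the split-out halves of that
wrapper, for callers that need a different sleep primitive in the middle, as
this patch does with schedule_hrtimeout(). Again roughly, from my reading of
kernel/sched/core.c:

	int io_schedule_prepare(void)
	{
		int old_iowait = current->in_iowait;

		/* mark the task as blocked on IO, flush plugged block IO */
		current->in_iowait = 1;
		blk_flush_plug(current->plug, true);
		return old_iowait;
	}

	void io_schedule_finish(int token)
	{
		/* restore the previous in_iowait state */
		current->in_iowait = token;
	}
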
> After that using io_uring is on par or surpassing synchronous IO (using
> registered files etc makes it reliably win, but arguably is a less fair
> comparison).
> 
> There are other calls to schedule() in io_uring/, but none immediately
> jump out to be similarly situated, so I did not touch them. Similarly,
> it's possible that mutex_lock_io() should be used, but it's not clear if
> there are cases where that matters.
> 
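mutex_lock_io() would be the analogous wrapper for sleeping on a mutex; for
reference, kernel/locking/mutex.c has essentially:

	void __sched mutex_lock_io(struct mutex *lock)
	{
		int token;

		/* same in_iowait bracketing, around a blocking mutex_lock() */
		token = io_schedule_prepare();
		mutex_lock(lock);
		io_schedule_finish(token);
	}
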
> Cc: stable@xxxxxxxxxxxxxxx # 5.10+
> Cc: Pavel Begunkov <asml.silence@xxxxxxxxx>
> Cc: io-uring@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Signed-off-by: Andres Freund <andres@xxxxxxxxxxx>
> Link: https://lore.kernel.org/r/20230707162007.194068-1-andres@xxxxxxxxxxx
> [axboe: minor style fixup]
> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> ---
>  io_uring/io_uring.c |   15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -2575,6 +2575,8 @@ int io_run_task_work_sig(struct io_ring_
>  static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
>  					  struct io_wait_queue *iowq)
>  {
> +	int token, ret;
> +
>  	if (unlikely(READ_ONCE(ctx->check_cq)))
>  		return 1;
>  	if (unlikely(!llist_empty(&ctx->work_llist)))
> @@ -2585,11 +2587,20 @@ static inline int io_cqring_wait_schedul
>  		return -EINTR;
>  	if (unlikely(io_should_wake(iowq)))
>  		return 0;
> +
> +	/*
> +	 * Use io_schedule_prepare/finish, so cpufreq can take into account
> +	 * that the task is waiting for IO - turns out to be important for low
> +	 * QD IO.
> +	 */
> +	token = io_schedule_prepare();
> +	ret = 0;
>  	if (iowq->timeout == KTIME_MAX)
>  		schedule();
>  	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
> -		return -ETIME;
> -	return 0;
> +		ret = -ETIME;
> +	io_schedule_finish(token);
> +	return ret;
>  }
>  
>  /*

This commit reportedly caused a regression; see [1], [2], and [3]. Not only is v6.4.4 affected; v6.1.39 is affected as well.

Reverting this commit fixes the issue.
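
For anyone who wants to verify, on a mainline tree the revert is, e.g.:

	git revert 8a796565cec3601071cbbd27d6304e202019d014

On the 6.4.y and 6.1.y stable branches the backported commit has a different
hash, so revert the corresponding backport there.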

Please check.

Thanks.

[1] https://bbs.archlinux.org/viewtopic.php?id=287343
[2] https://bugzilla.kernel.org/show_bug.cgi?id=217700
[3] https://bugzilla.kernel.org/show_bug.cgi?id=217699

-- 
Oleksandr Natalenko (post-factum)
