Re: [PATCH] io_uring: Fix bug in io_fallback_req_func that can cause deadlock

On 5/12/23 3:56 AM, luhongfei wrote:
> There was a bug in io_fallback_req_func that can cause deadlocks
> because uring_lock was not released on return.
> This patch releases the uring_lock before returning.
> 
> Signed-off-by: luhongfei <luhongfei@xxxxxxxx>
> ---
>  io_uring/io_uring.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>  mode change 100644 => 100755 io_uring/io_uring.c
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 3bca7a79efda..1af793c7b3da
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -252,8 +252,10 @@ static __cold void io_fallback_req_func(struct work_struct *work)
>  	mutex_lock(&ctx->uring_lock);
>  	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
>  		req->io_task_work.func(req, &ts);
> -	if (WARN_ON_ONCE(!ts.locked))
> +	if (WARN_ON_ONCE(!ts.locked)) {
> +		mutex_unlock(&ctx->uring_lock);
>  		return;
> +	}
>  	io_submit_flush_completions(ctx);
>  	mutex_unlock(&ctx->uring_lock);
>  }

I'm guessing you found this by reading the code, and didn't actually hit
it? Because it looks fine as-is. We lock the ctx->uring_lock, and set
ts.locked == true. If ts.locked is false, then someone unlocked the ring
further down, which is unexpected (hence the WARN_ON_ONCE()). But if
that did happen, then we definitely don't want to unlock it again.

Because of that, I don't think your patch is correct.
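
For context, the function reads roughly as follows, with the invariant
spelled out in comments. This is a simplified sketch: the lines outside
the quoted hunk (the ts initialization and the setup at the top) are
paraphrased from my reading of the current tree, so treat those details
as approximate:

	static __cold void io_fallback_req_func(struct work_struct *work)
	{
		struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
							fallback_work.work);
		struct llist_node *node = llist_del_all(&ctx->fallback_llist);
		struct io_kiocb *req, *tmp;
		/* We take uring_lock below, so the state starts out locked */
		struct io_tw_state ts = { .locked = true, };

		mutex_lock(&ctx->uring_lock);
		llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
			/* a handler that drops uring_lock must clear ts.locked */
			req->io_task_work.func(req, &ts);
		/*
		 * !ts.locked means a handler already dropped uring_lock,
		 * which is unexpected here. Unlocking again would be a
		 * double unlock of a mutex we no longer hold, so warn and
		 * return without touching the lock.
		 */
		if (WARN_ON_ONCE(!ts.locked))
			return;
		io_submit_flush_completions(ctx);
		mutex_unlock(&ctx->uring_lock);
	}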

-- 
Jens Axboe



