6.1-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Hagar Hemdan <hagarhem@xxxxxxxxxx>

commit 73254a297c2dd094abec7c9efee32455ae875bdf upstream.

The io_register_iowq_max_workers() function calls io_put_sq_data(),
which acquires the sqd->lock without releasing the uring_lock.

Similar to the commit 009ad9f0c6ee ("io_uring: drop ctx->uring_lock
before acquiring sqd->lock"), this can lead to a potential deadlock
situation.

To resolve this issue, the uring_lock is released before calling
io_put_sq_data(), and then it is re-acquired after the function call.

This change ensures that the locks are acquired in the correct order,
preventing the possibility of a deadlock.

Suggested-by: Maximilian Heyne <mheyne@xxxxxxxxx>
Signed-off-by: Hagar Hemdan <hagarhem@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20240604130527.3597-1-hagarhem@xxxxxxxxxx
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 io_uring/io_uring.c | 5 +++++
 1 file changed, 5 insertions(+)

--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3921,8 +3921,10 @@ static __cold int io_register_iowq_max_w
 	}
 
 	if (sqd) {
+		mutex_unlock(&ctx->uring_lock);
 		mutex_unlock(&sqd->lock);
 		io_put_sq_data(sqd);
+		mutex_lock(&ctx->uring_lock);
 	}
 
 	if (copy_to_user(arg, new_count, sizeof(new_count)))
@@ -3947,8 +3949,11 @@ static __cold int io_register_iowq_max_w
 	return 0;
 err:
 	if (sqd) {
+		mutex_unlock(&ctx->uring_lock);
 		mutex_unlock(&sqd->lock);
 		io_put_sq_data(sqd);
+		mutex_lock(&ctx->uring_lock);
+
 	}
 	return ret;
 }
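
[Editor's illustration, not part of the patch: a minimal userspace sketch of
the ordering rule the change enforces.  The names uring_lock, sqd_lock and
put_sq_data() are stand-ins chosen for this sketch only, not the real
io_uring symbols.  Since sqd->lock is documented to be taken before
ctx->uring_lock, a path that already holds uring_lock must drop it before
calling a helper that takes sqd->lock, and may re-take it afterwards.]

/*
 * Illustrative userspace analogue of the locking rule in the patch above.
 * Build with: cc -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t uring_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sqd_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for io_put_sq_data(): takes sqd_lock internally. */
static void put_sq_data(void)
{
	pthread_mutex_lock(&sqd_lock);
	/* ... drop the sq_data reference ... */
	pthread_mutex_unlock(&sqd_lock);
}

/* Pattern used by the patched io_register_iowq_max_workers() paths. */
static void update_worker_limits(void)
{
	pthread_mutex_lock(&uring_lock);	/* caller context holds uring_lock */

	/* ... update per-ctx worker limits ... */

	/*
	 * Drop uring_lock before calling into code that takes sqd_lock, so
	 * the two locks are never acquired in the inverse order, then
	 * re-take it because the caller expects uring_lock held on return.
	 */
	pthread_mutex_unlock(&uring_lock);
	put_sq_data();
	pthread_mutex_lock(&uring_lock);

	pthread_mutex_unlock(&uring_lock);
}

int main(void)
{
	update_worker_limits();
	puts("sqd_lock was never taken while uring_lock was held");
	return 0;
}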