This is a note to let you know that I've just added the patch titled

    io_uring: always grab lock in io_cancel_async_work()

to the 5.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-always-grab-lock-in-io_cancel_async_work.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


From 42a9b5f649124761a4ffd260d267295056eea113 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@xxxxxxxxx>
Date: Tue, 23 May 2023 08:23:32 -0600
Subject: io_uring: always grab lock in io_cancel_async_work()

From: Jens Axboe <axboe@xxxxxxxxx>

No upstream commit exists for this patch.

It's not necessarily safe to check the task_list locklessly, remove
this micro optimization and always grab task_lock before deeming it
empty.

Reported-and-tested-by: Lee Jones <lee@xxxxxxxxxx>
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 fs/io_uring.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3738,9 +3738,6 @@ static void io_cancel_async_work(struct
 {
 	struct io_kiocb *req;
 
-	if (list_empty(&ctx->task_list))
-		return;
-
 	spin_lock_irq(&ctx->task_lock);
 	list_for_each_entry(req, &ctx->task_list, task_list) {


Patches currently in stable-queue which might be from axboe@xxxxxxxxx are

queue-5.4/io_uring-don-t-drop-completion-lock-before-timer-is-fully-initialized.patch
queue-5.4/io_uring-have-io_kill_timeout-honor-the-request-references.patch
queue-5.4/io_uring-always-grab-lock-in-io_cancel_async_work.patch
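

For context on the kind of hazard the patch removes, here is a minimal,
self-contained userspace sketch in plain C with pthreads. It is not the
kernel code; every name in it is hypothetical. It contrasts a lockless
emptiness check before taking a lock (the pattern the patch deletes)
with always taking the lock before inspecting the list (the pattern the
patch enforces):

#include <pthread.h>
#include <stdio.h>

struct node {
	struct node *next;
	int value;
};

/* Shared singly linked list, standing in for ctx->task_list. */
static struct node *task_list;
/* Stand-in for ctx->task_lock. */
static pthread_mutex_t task_lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy pattern: peek at the list without the lock, then lock. */
static void cancel_all_racy(void)
{
	if (task_list == NULL)	/* unsynchronized read can race with a  */
		return;		/* concurrent producer or consumer      */

	pthread_mutex_lock(&task_lock);
	for (struct node *n = task_list; n; n = n->next)
		printf("cancel %d\n", n->value);
	pthread_mutex_unlock(&task_lock);
}

/* Pattern after the patch: only look at the list under the lock. */
static void cancel_all_safe(void)
{
	pthread_mutex_lock(&task_lock);
	for (struct node *n = task_list; n; n = n->next)
		printf("cancel %d\n", n->value);
	pthread_mutex_unlock(&task_lock);
}

int main(void)
{
	struct node n1 = { .next = NULL, .value = 42 };

	pthread_mutex_lock(&task_lock);
	task_list = &n1;
	pthread_mutex_unlock(&task_lock);

	cancel_all_racy();
	cancel_all_safe();
	return 0;
}

The point is simply that an unlocked read of the list head is not a
reliable way to decide the list is empty while other threads may be
adding or removing entries; the emptiness check only becomes meaningful
once the lock is held, which is what the patch makes the kernel do.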