Patch "io_uring: check if we need to reschedule during overflow flush" has been added to the 6.11-stable tree

This is a note to let you know that I've just added the patch titled

    io_uring: check if we need to reschedule during overflow flush

to the 6.11-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-check-if-we-need-to-reschedule-during-overf.patch
and it can be found in the queue-6.11 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 7e11e5ef025c9aa5a0b8b08ce654920e6d605c6c
Author: Jens Axboe <axboe@xxxxxxxxx>
Date:   Fri Sep 20 02:51:20 2024 -0600

    io_uring: check if we need to reschedule during overflow flush
    
    [ Upstream commit eac2ca2d682f94f46b1973bdf5e77d85d77b8e53 ]
    
    In terms of normal application usage, this list will always be empty.
    And if an application does overflow a bit, it'll have a few entries.
    However, nothing obviously prevents syzbot from running a test case
    that generates a ton of overflow entries, and then flushing them can
    take quite a while.
    
    Check for needing to reschedule while flushing, and drop our locks and
    do so if necessary. There's no state to maintain here as overflows
    always prune from head-of-list, hence it's fine to drop and reacquire
    the locks at the end of the loop.
    
    Link: https://lore.kernel.org/io-uring/66ed061d.050a0220.29194.0053.GAE@xxxxxxxxxx/
    Reported-by: syzbot+5fca234bd7eb378ff78e@xxxxxxxxxxxxxxxxxxxxxxxxx
    Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7a166120a45c3..7057d942fb2b0 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -627,6 +627,21 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
 		}
 		list_del(&ocqe->list);
 		kfree(ocqe);
+
+		/*
+		 * For silly syzbot cases that deliberately overflow by huge
+		 * amounts, check if we need to resched and drop and
+		 * reacquire the locks if so. Nothing real would ever hit this.
+		 * Ideally we'd have a non-posting unlock for this, but hard
+		 * to care for a non-real case.
+		 */
+		if (need_resched()) {
+			io_cq_unlock_post(ctx);
+			mutex_unlock(&ctx->uring_lock);
+			cond_resched();
+			mutex_lock(&ctx->uring_lock);
+			io_cq_lock(ctx);
+		}
 	}
 
 	if (list_empty(&ctx->cq_overflow_list)) {
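
For readers who want to experiment with the pattern outside the kernel, below is a
minimal user-space sketch of the same idea: while draining a long list under a lock,
periodically drop the lock, yield, and reacquire before continuing. The names here
(drain_overflow, struct ocqe_entry, the 1024-entry yield interval) are made up for
illustration, and pthread_mutex_*()/sched_yield() merely stand in for the kernel's
io_cq_lock()/uring_lock and cond_resched(); the actual kernel change is the diff above.

/* User-space analogue of dropping and reacquiring locks while flushing. */
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

struct ocqe_entry {
	struct ocqe_entry *next;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct ocqe_entry *overflow_head;

static void drain_overflow(void)
{
	unsigned long freed = 0;

	pthread_mutex_lock(&list_lock);
	while (overflow_head) {
		struct ocqe_entry *ocqe = overflow_head;

		/*
		 * Entries are only ever pruned from the head of the list, so
		 * no iterator state needs to survive a lock drop; after
		 * reacquiring we simply continue from the (new) head.
		 */
		overflow_head = ocqe->next;
		free(ocqe);

		/*
		 * User space has no need_resched(); instead, yield every so
		 * often so a huge backlog does not monopolize the lock.
		 */
		if (++freed % 1024 == 0) {
			pthread_mutex_unlock(&list_lock);
			sched_yield();
			pthread_mutex_lock(&list_lock);
		}
	}
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	/* Build a deliberately long list, then drain it. */
	for (int i = 0; i < 100000; i++) {
		struct ocqe_entry *e = malloc(sizeof(*e));

		e->next = overflow_head;
		overflow_head = e;
	}
	drain_overflow();
	return 0;
}

As in the patch, correctness relies on the list only ever being pruned from its head,
so nothing has to be saved or restored across the point where the lock is dropped.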



