This is a note to let you know that I've just added the patch titled

    io_uring: add reschedule point to handle_tw_list()

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     io_uring-add-reschedule-point-to-handle_tw_list.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From f58680085478dd292435727210122960d38e8014 Mon Sep 17 00:00:00 2001
From: Jens Axboe <axboe@xxxxxxxxx>
Date: Fri, 27 Jan 2023 09:50:31 -0700
Subject: io_uring: add reschedule point to handle_tw_list()

From: Jens Axboe <axboe@xxxxxxxxx>

commit f58680085478dd292435727210122960d38e8014 upstream.

If CONFIG_PREEMPT_NONE is set and the task_work chains are long, we
could be running into issues blocking others for too long. Add a
reschedule check in handle_tw_list(), and flush the ctx if we need to
reschedule.

Cc: stable@xxxxxxxxxxxxxxx # 5.10+
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 io_uring/io_uring.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1030,10 +1030,16 @@ static unsigned int handle_tw_list(struc
 			/* if not contended, grab and improve batching */
 			*locked = mutex_trylock(&(*ctx)->uring_lock);
 			percpu_ref_get(&(*ctx)->refs);
-		}
+		} else if (!*locked)
+			*locked = mutex_trylock(&(*ctx)->uring_lock);
 		req->io_task_work.func(req, locked);
 		node = next;
 		count++;
+		if (unlikely(need_resched())) {
+			ctx_flush_and_put(*ctx, locked);
+			*ctx = NULL;
+			cond_resched();
+		}
 	}

 	return count;


Patches currently in stable-queue which might be from axboe@xxxxxxxxx are

queue-6.1/sbitmap-use-single-per-bitmap-counting-to-wake-up-qu.patch
queue-6.1/io_uring-handle-tif_notify_resume-when-checking-for-task_work.patch
queue-6.1/block-don-t-allow-multiple-bios-for-iocb_nowait-issue.patch
queue-6.1/sbitmap-correct-wake_batch-recalculation-to-avoid-po.patch
queue-6.1/blk-mq-avoid-sleep-in-blk_mq_alloc_request_hctx.patch
queue-6.1/io_uring-add-reschedule-point-to-handle_tw_list.patch
queue-6.1/ublk_drv-remove-nr_aborted_queues-from-ublk_device.patch
queue-6.1/io_uring-remove-msg_nosignal-from-recvmsg.patch
queue-6.1/blk-mq-fix-potential-io-hung-for-shared-sbitmap-per-.patch
queue-6.1/blk-mq-wait-on-correct-sbitmap_queue-in-blk_mq_mark_.patch
queue-6.1/block-clear-bio-bi_bdev-when-putting-a-bio-back-in-the-cache.patch
queue-6.1/io_uring-fix-fget-leak-when-fs-don-t-support-nowait-buffered-read.patch
queue-6.1/ublk_drv-don-t-probe-partitions-if-the-ubq-daemon-is.patch
queue-6.1/trace-blktrace-fix-memory-leak-with-using-debugfs_lo.patch
queue-6.1/io_uring-rsrc-disallow-multi-source-reg-buffers.patch
queue-6.1/x86-fpu-don-t-set-tif_need_fpu_load-for-pf_io_worker.patch
queue-6.1/io_uring-replace-0-length-array-with-flexible-array.patch
queue-6.1/blk-cgroup-dropping-parent-refcount-after-pd_free_fn.patch
queue-6.1/block-be-a-bit-more-careful-in-checking-for-null-bdev-while-polling.patch
queue-6.1/block-use-proper-return-value-from-bio_failfast.patch
queue-6.1/block-fix-io-statistics-for-cgroup-in-throttle-path.patch
queue-6.1/block-ublk-check-io-buffer-based-on-flag-need_get_da.patch
queue-6.1/io_uring-use-user-visible-tail-in-io_uring_poll.patch
queue-6.1/blk-cgroup-synchronize-pd_free_fn-from-blkg_free_wor.patch
queue-6.1/sbitmap-remove-redundant-check-in-__sbitmap_queue_ge.patch
queue-6.1/block-sync-mixed-merged-request-s-failfast-with-1st-.patch
queue-6.1/blk-mq-remove-stale-comment-for-blk_mq_sched_mark_re.patch
queue-6.1/blk-iocost-fix-divide-by-0-error-in-calc_lcoefs.patch
queue-6.1/s390-dasd-fix-potential-memleak-in-dasd_eckd_init.patch
queue-6.1/blk-mq-correct-stale-comment-of-.get_budget.patch
queue-6.1/io_uring-add-a-conditional-reschedule-to-the-iopoll-cancelation-loop.patch
queue-6.1/block-bio-integrity-copy-flags-when-bio_integrity_pa.patch
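For readers following along, the change above is an instance of the usual
kernel pattern of putting a voluntary reschedule point inside a potentially
long loop: on CONFIG_PREEMPT_NONE kernels nothing preempts the loop, so it
checks need_resched(), drops any state that must not be held across a sleep
(here the ctx is flushed and cleared), and calls cond_resched().  Below is a
minimal illustrative sketch of that generic pattern, not the io_uring code
itself; struct work_item, process_one() and flush_state() are hypothetical
stand-ins.

/*
 * Illustrative sketch only: the generic "reschedule point in a long
 * loop" pattern that the patch applies to handle_tw_list().
 * struct work_item, process_one() and flush_state() are hypothetical.
 */
#include <linux/sched.h>	/* need_resched(), cond_resched() */

struct work_item {
	struct work_item *next;
};

static void process_one(struct work_item *item)
{
	/* hypothetical per-item work */
}

static void flush_state(void)
{
	/* hypothetical: drop state that must not be held across a sleep */
}

static unsigned int run_work_list(struct work_item *item)
{
	unsigned int count = 0;

	while (item) {
		struct work_item *next = item->next;

		process_one(item);
		item = next;
		count++;

		/*
		 * With CONFIG_PREEMPT_NONE nothing preempts this loop,
		 * so yield voluntarily when the scheduler wants the CPU;
		 * the real patch flushes and clears the ctx before
		 * calling cond_resched().
		 */
		if (unlikely(need_resched())) {
			flush_state();
			cond_resched();
		}
	}

	return count;
}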