Patch "sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy" has been added to the 6.12-stable tree

This is a note to let you know that I've just added the patch titled

    sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy

to the 6.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-check-idle_cpu-before-need_resched-to-det.patch
and it can be found in the queue-6.12 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit c847cd589507d709a386802f9fa52e36a1726972
Author: K Prateek Nayak <kprateek.nayak@xxxxxxx>
Date:   Tue Nov 19 05:44:31 2024 +0000

    sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy
    
    [ Upstream commit ff47a0acfcce309cf9e175149c75614491953c8f ]
    
    Commit b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
    optimizes IPIs to idle CPUs in TIF_POLLING_NRFLAG mode by setting the
    TIF_NEED_RESCHED flag in the idle task's thread info and relying on
    flush_smp_call_function_queue() in the idle exit path to run the
    call-function. A softirq raised by the call-function is handled shortly
    after in do_softirq_post_smp_call_flush(), but the TIF_NEED_RESCHED flag
    remains set and is only cleared later when schedule_idle() calls
    __schedule().
    
    The need_resched() check in _nohz_idle_balance() exists to bail out of
    load balancing if another task has woken up on the CPU currently in
    charge of idle load balancing, which is processed in SCHED_SOFTIRQ
    context. Since the optimization mentioned above overloads the
    interpretation of TIF_NEED_RESCHED, check idle_cpu() before the
    existing need_resched() check; this still catches a genuine task wakeup
    on an idle CPU processing SCHED_SOFTIRQ from
    do_softirq_post_smp_call_flush(), as well as the case where ksoftirqd
    needs to be preempted as a result of a new task wakeup or slice expiry.
    
    In case of PREEMPT_RT or threadirqs, although idle load balancing may
    be inhibited in some cases on the ilb CPU, the fact that ksoftirqd is
    the only fair task going back to sleep will trigger a newidle balance
    on that CPU, which will alleviate some of the imbalance, if any exists,
    should the idle balance fail to do so.
    
    Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
    Signed-off-by: K Prateek Nayak <kprateek.nayak@xxxxxxx>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/20241119054432.6405-4-kprateek.nayak@xxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
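
The stand-alone C sketch below is a simplified user-space model of the
behaviour described in the commit message, not kernel code; all names
(tif_need_resched, send_call_function_single_ipi_model, idle_exit_model)
are made up for illustration. It shows how the NEED_RESCHED-style flag
can still be set while the SCHED_SOFTIRQ work runs, even though no task
has actually woken up on the CPU:

/*
 * Hypothetical model of the TIF_POLLING_NRFLAG IPI optimization:
 * the remote CPU "sends" the IPI by setting a flag, the pending
 * call-function (and the softirq it raises) runs on idle exit, but
 * the flag itself is only cleared later by the scheduler.
 */
#include <stdbool.h>
#include <stdio.h>

static bool tif_need_resched;   /* models TIF_NEED_RESCHED            */
static bool call_fn_pending;    /* models a queued SMP call-function  */

/* Remote side: "send" an IPI to a polling idle CPU by flagging it. */
static void send_call_function_single_ipi_model(void)
{
	call_fn_pending = true;
	tif_need_resched = true;    /* optimization: no real IPI sent */
}

/* Idle exit path: flush call-functions, run the raised softirq. */
static void idle_exit_model(void)
{
	if (call_fn_pending) {
		call_fn_pending = false;
		/* SCHED_SOFTIRQ (e.g. nohz idle balance) runs here,  */
		/* while the flag is still set although no task has   */
		/* actually woken up on this CPU.                     */
		printf("softirq runs: need_resched=%d\n", tif_need_resched);
	}
	/* Only later does the scheduler clear the flag. */
	tif_need_resched = false;
}

int main(void)
{
	send_call_function_single_ipi_model();
	idle_exit_model();
	return 0;
}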

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 49fa456cd3320..782ce70ebd1b0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12580,7 +12580,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 		 * work being done for other CPUs. Next load
 		 * balancing owner will pick it up.
 		 */
-		if (need_resched()) {
+		if (!idle_cpu(this_cpu) && need_resched()) {
 			if (flags & NOHZ_STATS_KICK)
 				has_blocked_load = true;
 			if (flags & NOHZ_NEXT_KICK)
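
For illustration only, here is a hedged user-space sketch of the old
versus new bail-out condition; abort_ilb_old()/abort_ilb_new() are
hypothetical helpers standing in for the check in _nohz_idle_balance(),
not kernel functions:

/*
 * Hypothetical sketch (not kernel code) of the bail-out condition
 * before and after this patch. "cpu_is_idle" stands in for
 * idle_cpu(this_cpu) and "need_resched" for need_resched().
 */
#include <stdbool.h>
#include <stdio.h>

/* Old check: abort idle load balancing whenever NEED_RESCHED is set. */
static bool abort_ilb_old(bool cpu_is_idle, bool need_resched)
{
	(void)cpu_is_idle;
	return need_resched;
}

/* New check: only abort if the CPU has actually turned busy as well. */
static bool abort_ilb_new(bool cpu_is_idle, bool need_resched)
{
	return !cpu_is_idle && need_resched;
}

int main(void)
{
	/* CPU still idle, flag set only by the IPI optimization:      */
	/* the old check aborted the balance, the new one carries on.  */
	printf("idle + need_resched : old=%d new=%d\n",
	       abort_ilb_old(true, true), abort_ilb_new(true, true));

	/* A task really woke up on the ilb CPU: both checks bail out. */
	printf("busy + need_resched : old=%d new=%d\n",
	       abort_ilb_old(false, true), abort_ilb_new(false, true));
	return 0;
}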



