Hi Scott,

On Thu, 29 Apr 2021 at 01:28, Scott Wood <swood@xxxxxxxxxx> wrote:
>
> These patches mitigate latency caused by newidle_balance() on large
> systems when PREEMPT_RT is enabled, by enabling interrupts when the lock
> is dropped, and exiting early at various points if an RT task is runnable
> on the current CPU.
>
> On a system with 128 CPUs, these patches dropped latency (as measured by
> a 12 hour rteval run) from 1045us to 317us (when applied to
> 5.12.0-rc3-rt3).

The patch below has been queued for v5.13 and removes the update of
blocked load, which seemed to be the major reason for long preempt/irq-off
sections during newly-idle balance:
https://lore.kernel.org/lkml/20210224133007.28644-1-vincent.guittot@xxxxxxxxxx/

I would be curious to see how it impacts your cases.

> I tried a couple of scheduler benchmarks (perf bench sched pipe, and
> sysbench threads) to try to determine whether the overhead is measurable
> on non-RT, but the results varied widely enough (with or without the
> patches) that I couldn't draw any conclusions from them. So at least for
> now, I limited the balance callback change to when PREEMPT_RT is enabled.
>
> Link to v1 RFC patches:
> https://lore.kernel.org/lkml/20200428050242.17717-1-swood@xxxxxxxxxx/
>
> Scott Wood (3):
>   sched/fair: Call newidle_balance() from balance_callback on PREEMPT_RT
>   sched/fair: Enable interrupts when dropping lock in newidle_balance()
>   sched/fair: break out of newidle balancing if an RT task appears
>
>  kernel/sched/fair.c  | 66 ++++++++++++++++++++++++++++++++++++++------
>  kernel/sched/sched.h |  6 ++++
>  2 files changed, 64 insertions(+), 8 deletions(-)
>
> --
> 2.27.0
>
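
For readers skimming the thread, a rough sketch of the two mechanisms the
cover letter describes (re-enabling interrupts while the rq lock is dropped,
and bailing out of the domain walk once an RT task becomes runnable) could
look something like the fragment below, written against the shape of the
5.12-era newidle_balance(). This is an illustration only, not the code from
Scott's patches: the function name newidle_balance_sketch() and the exact
placement of the PREEMPT_RT checks are assumptions made for clarity.

/* Illustrative sketch only -- not the actual patches from this series. */
static int newidle_balance_sketch(struct rq *this_rq, struct rq_flags *rf)
{
        int this_cpu = this_rq->cpu;
        struct sched_domain *sd;
        int pulled_task = 0;

        /*
         * Idea of patch 2: the expensive domain walk already runs with the
         * rq lock dropped, so on PREEMPT_RT also re-enable interrupts for
         * that window to keep irq-off latency bounded.
         */
        rq_unpin_lock(this_rq, rf);
        raw_spin_unlock(&this_rq->lock);
        if (IS_ENABLED(CONFIG_PREEMPT_RT))
                local_irq_enable();

        rcu_read_lock();
        for_each_domain(this_cpu, sd) {
                /*
                 * Idea of patch 3: if an RT task has appeared on this CPU,
                 * further fair-class balancing only delays it, so break out
                 * of the walk early.  This is a racy, heuristic read of
                 * rt_nr_running taken without the rq lock.
                 */
                if (IS_ENABLED(CONFIG_PREEMPT_RT) && this_rq->rt.rt_nr_running)
                        break;

                /*
                 * ... load_balance(this_cpu, this_rq, sd, CPU_NEWLY_IDLE, ...)
                 * as in the real code, updating pulled_task ...
                 */
        }
        rcu_read_unlock();

        if (IS_ENABLED(CONFIG_PREEMPT_RT))
                local_irq_disable();
        raw_spin_lock(&this_rq->lock);
        rq_repin_lock(this_rq, rf);

        return pulled_task;
}

The series measured above presumably also reworks how this is driven (patch 1
moves the call to a balance callback on PREEMPT_RT), which the sketch does not
attempt to show.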