[PATCH v2 0/3] newidle_balance() PREEMPT_RT latency mitigations

These patches mitigate the latency caused by newidle_balance() on large
systems when PREEMPT_RT is enabled, by enabling interrupts while the lock
is dropped, and by exiting early at various points if an RT task becomes
runnable on the current CPU.
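
Roughly, the interrupt-enabling and early-exit changes amount to the
following inside newidle_balance().  This is an illustrative sketch
only, not the actual diff; the exact unlock/lock sites, the RT check,
and the loop body are simplified:

	/*
	 * Sketch: newidle_balance() is entered with this_rq->lock held
	 * and interrupts disabled.  Dropping the lock with interrupts
	 * enabled lets pending IRQs (and any RT wakeups they cause) run
	 * instead of waiting for the whole balance pass, and the domain
	 * scan bails out as soon as an RT task shows up on this CPU.
	 */
	raw_spin_unlock_irq(&this_rq->lock);

	for_each_domain(this_cpu, sd) {
		int continue_balancing = 1;

		/* Bail out if an RT task became runnable on this CPU. */
		if (this_rq->rt.rt_nr_running)
			break;

		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE,
						   &continue_balancing);

		if (pulled_task || !continue_balancing)
			break;
	}

	raw_spin_lock_irq(&this_rq->lock);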

On a system with 128 CPUs, these patches dropped latency (as measured by
a 12-hour rteval run) from 1045us to 317us when applied to
5.12.0-rc3-rt3.

I tried a couple of scheduler benchmarks (perf bench sched pipe and
sysbench threads) to determine whether the overhead is measurable on
non-RT kernels, but the results varied widely enough (with or without
the patches) that I couldn't draw any conclusions from them.  So at
least for now, I limited the balance callback change to kernels built
with PREEMPT_RT.
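
One way to confine the balance callback change to PREEMPT_RT is to gate
it with IS_ENABLED(CONFIG_PREEMPT_RT) around the existing
queue_balance_callback() machinery, so the balance runs after the rq
lock has been released.  The sketch below is purely illustrative; the
per-CPU callback head and the deferred_newidle_balance() name are made
up for this example:

	static DEFINE_PER_CPU(struct callback_head, newidle_balance_head);

	/* In the pick-next path, with rq->lock still held: */
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		/* Defer the balance until after rq->lock is released. */
		queue_balance_callback(rq,
				       this_cpu_ptr(&newidle_balance_head),
				       deferred_newidle_balance);
	else
		newidle_balance(rq, rf);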

Link to v1 RFC patches:
https://lore.kernel.org/lkml/20200428050242.17717-1-swood@xxxxxxxxxx/

Scott Wood (3):
  sched/fair: Call newidle_balance() from balance_callback on PREEMPT_RT
  sched/fair: Enable interrupts when dropping lock in newidle_balance()
  sched/fair: break out of newidle balancing if an RT task appears

 kernel/sched/fair.c  | 66 ++++++++++++++++++++++++++++++++++++++------
 kernel/sched/sched.h |  6 ++++
 2 files changed, 64 insertions(+), 8 deletions(-)

-- 
2.27.0



