Fix a bug in dynamic preemption where the kernel will yield contended
spinlocks (and rwlocks) even if the selected preemption model is "none"
or "voluntary".  I say "bug" because this divergence from PREEMPT_DYNAMIC=n
behavior effectively broke existing KVM configurations, e.g. vCPUs would
get stuck and become unresponsive for multiple seconds if there was heavy
KSM or NUMA balancing activity in the host.

This isn't super urgent, as 6.8 has a fix in KVM for the over-aggressive
yielding (commit d02c357e5bfa ("KVM: x86/mmu: Retry fault before acquiring
mmu_lock if mapping is changing")), but I wouldn't be surprised if the
behavior is causing other performance issues/regressions that are less
severe and/or less visible.

v2:
 - Rebase onto Linus' tree to deal with the code movement to spinlock.h.
 - Opportunistically document the behavior.
 - Add the PREEMPT_AUTO folks to Cc to get their eyeballs/input.

v1: https://lore.kernel.org/all/20240110214723.695930-1-seanjc@xxxxxxxxxx

Sean Christopherson (2):
  sched/core: Move preempt_model_*() helpers from sched.h to preempt.h
  sched/core: Drop spinlocks on contention iff kernel is preemptible

 .../admin-guide/kernel-parameters.txt |  4 +-
 include/linux/preempt.h               | 41 +++++++++++++++++++
 include/linux/sched.h                 | 41 -------------------
 include/linux/spinlock.h              | 14 +++---
 4 files changed, 50 insertions(+), 50 deletions(-)


base-commit: b29f377119f68b942369a9366bdcb1fec82b2cda
-- 
2.44.0.278.ge034bb2e1d-goog
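
P.S. For readers not steeped in the locking code, a minimal sketch of the
direction patch 2 takes (illustrative only, not the actual diff): gate the
"this lock is contended, drop it and reschedule" answer on
preempt_model_preemptible(), which patch 1 makes reachable from preempt.h.
spin_needbreak(), spin_is_contended(), and preempt_model_preemptible() are
existing kernel helpers; the exact guards and placement in
include/linux/spinlock.h may differ from this sketch.

  /*
   * Sketch of the idea: only report a contended lock as "needs break",
   * and thus trigger a yield in cond_resched_lock() and friends, when
   * the kernel is actually preemptible, so that PREEMPT_DYNAMIC with
   * "none" or "voluntary" behaves like PREEMPT_DYNAMIC=n.
   */
  static inline int spin_needbreak(spinlock_t *lock)
  {
  	if (!preempt_model_preemptible())
  		return 0;

  	return spin_is_contended(lock);
  }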