A semantic gap opens when a guest OS is preempted while executing its
own critical section, which degrades application scalability. This
series tries to bridge that gap: by passing the guest preempt_count to
the host and checking the guest IRQ-disable state, the hypervisor knows
whether a guest OS is running in a critical section. The yield-on-spin
heuristics can then be smarter and boost the vCPU candidate which is in
a critical section, mitigating the preemption problem; in addition, such
a vCPU is more likely to be a potential lock holder. (An illustrative
sketch of the guest-side registration follows the diffstat.)

Tested on a dual-socket Intel Xeon Cascade Lake (CLX) server with 96
hardware threads, using 96-vCPU VMs with 100GB RAM each: one VM runs the
benchmark while the other (N-1) VMs run CPU-bound workloads. hackbench
results are times in seconds (lower is better); ebizzy results are
records (higher is better).

1 VM:
                     vanilla        optimized      improved
hackbench -l 50000   28             21.45          30.5%
ebizzy -M            12189          12354          1.4%
dbench               712 MB/sec     722 MB/sec     1.4%

2 VMs:
                     vanilla        optimized      improved
hackbench -l 10000   29.4           26             13%
ebizzy -M            3834           4033           5%
dbench               42.3 MB/sec    44.1 MB/sec    4.3%

3 VMs:
                     vanilla        optimized      improved
hackbench -l 10000   47             35.46          33%
ebizzy -M            3828           4031           5%
dbench               30.5 MB/sec    31.16 MB/sec   2.3%

There is no performance regression for other benchmarks such as
Unixbench.

Wanpeng Li (5):
  KVM: X86: Add MSR_KVM_PREEMPT_COUNT support
  KVM: X86: Add guest interrupt disable state support
  KVM: X86: Boost vCPU which is in the critical section
  x86/kvm: Add MSR_KVM_PREEMPT_COUNT guest support
  KVM: X86: Expose PREEMPT_COUNT CPUID feature bit to guest

 Documentation/virt/kvm/cpuid.rst     |  3 ++
 arch/x86/include/asm/kvm_host.h      |  7 ++++
 arch/x86/include/uapi/asm/kvm_para.h |  2 +
 arch/x86/kernel/kvm.c                | 10 +++++
 arch/x86/kvm/cpuid.c                 |  3 +-
 arch/x86/kvm/x86.c                   | 60 ++++++++++++++++++++++++++++
 include/linux/kvm_host.h             |  1 +
 virt/kvm/kvm_main.c                  |  7 ++++
 8 files changed, 92 insertions(+), 1 deletion(-)

--
2.25.1
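
Below is an illustrative sketch, not the patch itself, of how the guest
side might register its per-CPU preempt_count with the host through the
new MSR_KVM_PREEMPT_COUNT, following the same registration pattern that
steal time uses in arch/x86/kernel/kvm.c. The feature-bit name
KVM_FEATURE_PREEMPT_COUNT, the helper name kvm_register_preempt_count(),
and the use of the KVM_MSR_ENABLED bit (mirroring the steal-time MSR
layout) are assumptions for illustration only.

#include <linux/percpu.h>       /* this_cpu_ptr() */
#include <asm/kvm_para.h>       /* kvm_para_has_feature(), KVM_MSR_ENABLED */
#include <asm/msr.h>            /* wrmsrl() */
#include <asm/pgtable.h>        /* slow_virt_to_phys() */
#include <asm/preempt.h>        /* per-CPU __preempt_count */

/*
 * Sketch: tell the host where this vCPU's preempt_count lives, so the
 * yield-on-spin heuristics can see whether a preempted vCPU was inside
 * a critical section.  KVM_FEATURE_PREEMPT_COUNT is an assumed
 * feature-bit name; MSR_KVM_PREEMPT_COUNT comes from the series.
 */
static void kvm_register_preempt_count(void)
{
	u64 pa;

	if (!kvm_para_has_feature(KVM_FEATURE_PREEMPT_COUNT))
		return;

	/* Physical address of the per-CPU counter, plus the enable bit. */
	pa = slow_virt_to_phys(this_cpu_ptr(&__preempt_count));
	wrmsrl(MSR_KVM_PREEMPT_COUNT, pa | KVM_MSR_ENABLED);
}

The host side is symmetric in spirit: when kvm_vcpu_on_spin() scans for
a vCPU to boost, it can read the registered preempt_count (and the
guest IRQ-disable state) and prefer a candidate for which either one
indicates a critical section.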