The idea is from Xen: when sending a call-function IPI-many to vCPUs, yield
to the hypervisor if any of the IPI target vCPUs was preempted. A 17%
performance improvement in the ebizzy benchmark can be observed in an
oversubscribed environment (with kvm-pv-tlb disabled, exercising the TLB
flush call-function IPI-many path, since call-function IPIs are not easy to
trigger from a userspace workload).

v1 -> v2:
 * check map is not NULL
 * check map->phys_map[dest_id] is not NULL
 * make kvm_sched_yield static
 * change dest_id to unsigned long

Wanpeng Li (3):
  KVM: X86: Implement PV sched yield in linux guest
  KVM: X86: Implement PV sched yield hypercall
  KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest

 Documentation/virtual/kvm/cpuid.txt      |  4 ++++
 Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
 arch/x86/include/uapi/asm/kvm_para.h     |  1 +
 arch/x86/kernel/kvm.c                    | 21 +++++++++++++++++++++
 arch/x86/kvm/cpuid.c                     |  3 ++-
 arch/x86/kvm/x86.c                       | 26 ++++++++++++++++++++++++++
 include/uapi/linux/kvm_para.h            |  1 +
 7 files changed, 66 insertions(+), 1 deletion(-)

-- 
2.7.4
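
For readers unfamiliar with the mechanism, the sketch below outlines the
guest-side idea under the assumptions of this series: the steal-time
"preempted" hint queried via vcpu_is_preempted() and a KVM_HC_SCHED_YIELD
hypercall that takes the target vCPU's APIC ID. Function and variable names
here (pv_send_call_func_ipi) are illustrative, not the patch itself.

	/*
	 * Sketch: after sending a call-function IPI to a set of vCPUs,
	 * if any target vCPU is currently preempted by the host, yield
	 * our time slice toward it via the PV sched yield hypercall so
	 * it can run and handle the IPI sooner.
	 */
	#include <linux/cpumask.h>
	#include <linux/sched.h>
	#include <linux/kvm_para.h>
	#include <asm/smp.h>

	static void pv_send_call_func_ipi(const struct cpumask *mask)
	{
		int cpu;

		/* Deliver the call-function IPI as usual. */
		native_send_call_func_ipi(mask);

		/*
		 * If one of the targets lost its physical CPU, tell the
		 * host which vCPU we would rather see running instead of
		 * spinning while waiting for its acknowledgement.
		 */
		for_each_cpu(cpu, mask) {
			if (vcpu_is_preempted(cpu)) {
				kvm_hypercall1(KVM_HC_SCHED_YIELD,
					       per_cpu(x86_cpu_to_apicid, cpu));
				break;
			}
		}
	}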