On 28.05.19 02:53, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>
> The target vCPUs are in a runnable state after vcpu_kick and are
> therefore suitable as yield targets. This patch implements the sched
> yield hypercall.
>
> A 17% performance increase in the ebizzy benchmark can be observed in
> an oversubscribed environment (with kvm-pv-tlb disabled, testing the
> TLB flush call-function IPI-many path, since call-function is not easy
> to trigger from a userspace workload).
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>

FWIW, we do have a similar interface on s390. See
__diag_time_slice_end_directed in arch/s390/kvm/diag.c for our
implementation.

> ---
>  arch/x86/kvm/x86.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index e7e57de..2ceef51 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7172,6 +7172,26 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
>  	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
>  }
>
> +void kvm_sched_yield(struct kvm *kvm, u64 dest_id)
> +{
> +	struct kvm_vcpu *target;
> +	struct kvm_apic_map *map;
> +
> +	rcu_read_lock();
> +	map = rcu_dereference(kvm->arch.apic_map);
> +
> +	if (unlikely(!map))
> +		goto out;
> +
> +	if (map->phys_map[dest_id]->vcpu) {
> +		target = map->phys_map[dest_id]->vcpu;
> +		kvm_vcpu_yield_to(target);
> +	}
> +
> +out:
> +	rcu_read_unlock();
> +}
> +
>  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long nr, a0, a1, a2, a3, ret;
> @@ -7218,6 +7238,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  	case KVM_HC_SEND_IPI:
>  		ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
>  		break;
> +	case KVM_HC_SCHED_YIELD:
> +		kvm_sched_yield(vcpu->kvm, a0);
> +		ret = 0;
> +		break;
>  	default:
>  		ret = -KVM_ENOSYS;
>  		break;