From: Wanpeng Li <wanpengli@xxxxxxxxxxx>

Sometimes a vCPU kick follows a pending request even when @vcpu is the
running vCPU. In that case the kick still pays for rcuwait_wake_up(),
which performs RCU and memory-barrier operations, and for a cmpxchg().
Check vcpu->wait before calling rcuwait_wake_up(), and check whether
@vcpu is the running vCPU before the cmpxchg(), to cut this overhead.

We evaluated vmexit.flat from kvm-unit-tests on an Intel ICX box; most
of the scores improve by ~600 CPU cycles, especially when APICv is
disabled:

  tscdeadline_immed
  tscdeadline
  self_ipi_sti_nop
  ..............
  x2apic_self_ipi_tpr_sti_hlt

Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
---
v1 -> v2:
 * move the running-vCPU check into kvm_vcpu_kick()
 * check rcuwait_active(&vcpu->wait) before waking up

 virt/kvm/kvm_main.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7851f3a1b5f7..18209d7b3711 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3314,8 +3314,15 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 {
 	int me, cpu;
 
-	if (kvm_vcpu_wake_up(vcpu))
-		return;
+	me = get_cpu();
+
+	if (rcuwait_active(&vcpu->wait) && kvm_vcpu_wake_up(vcpu))
+		goto out;
+
+	if (vcpu == __this_cpu_read(kvm_running_vcpu)) {
+		WARN_ON_ONCE(vcpu->mode == IN_GUEST_MODE);
+		goto out;
+	}
 
 	/*
 	 * Note, the vCPU could get migrated to a different pCPU at any point
@@ -3324,12 +3331,12 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 	 * IPI is to force the vCPU to leave IN_GUEST_MODE, and migrating the
 	 * vCPU also requires it to leave IN_GUEST_MODE.
 	 */
-	me = get_cpu();
 	if (kvm_arch_vcpu_should_kick(vcpu)) {
 		cpu = READ_ONCE(vcpu->cpu);
 		if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
 			smp_send_reschedule(cpu);
 	}
+out:
 	put_cpu();
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
-- 
2.25.1
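
For context on why the rcuwait_active() check helps: rcuwait_wake_up()
issues a full memory barrier and enters an RCU read-side critical
section even when no waiter exists, so skipping it for an empty wait
avoids that cost entirely. A paraphrased sketch of its shape (based on
kernel/exit.c; exact code varies by kernel version):

int rcuwait_wake_up(struct rcuwait *w)
{
	int ret = 0;
	struct task_struct *task;

	rcu_read_lock();

	/*
	 * Full barrier to order the waiter's condition store against
	 * the load of w->task below; paid on every call, even when
	 * there is no waiter to wake.
	 */
	smp_mb();

	task = rcu_dereference(w->task);
	if (task)
		ret = wake_up_process(task);
	rcu_read_unlock();

	return ret;
}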