On Fri, Apr 24, 2020 at 02:22:42PM +0800, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>
> Optimizing posted-interrupt delivery, especially for the timer fastpath
> scenario: I observe that kvm_x86_ops.deliver_posted_interrupt() has more
> latency than vmx_sync_pir_to_irr() in the timer fastpath scenario, since
> it needs to wait for vmentry; only after that can it handle the external
> interrupt, ack the notification vector, read the posted-interrupt
> descriptor, etc. That is slower than evaluating and delivering immediately
> during vmentry. Let's skip sending the interrupt to notify the target pCPU
> and instead call vmx_sync_pir_to_irr() before each cont_run.
>
> Tested-by: Haiwei Li <lihaiwei@xxxxxxxxxxx>
> Cc: Haiwei Li <lihaiwei@xxxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmx.c | 9 ++++++---
>  virt/kvm/kvm_main.c    | 1 +
>  2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5c21027..d21b66b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3909,8 +3909,9 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
>  	if (pi_test_and_set_on(&vmx->pi_desc))
>  		return 0;
>
> -	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
> -		kvm_vcpu_kick(vcpu);
> +	if (vcpu != kvm_get_running_vcpu() &&
> +	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))

Bad indentation.

> +		kvm_vcpu_kick(vcpu);
>
>  	return 0;
>  }
> @@ -6757,8 +6758,10 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (!kvm_need_cancel_enter_guest(vcpu)) {
>  		exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
>  		/* static call is better with retpolines */
> -		if (exit_fastpath == EXIT_FASTPATH_CONT_RUN)
> +		if (exit_fastpath == EXIT_FASTPATH_CONT_RUN) {
> +			vmx_sync_pir_to_irr(vcpu);
>  			goto cont_run;
> +		}
>  	}
>
>  	return exit_fastpath;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index e7436d0..6a289d1 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4633,6 +4633,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
>
>  	return vcpu;
>  }
> +EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
>
>  /**
>   * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
> --
> 2.7.4
>