On Thu, Jun 16, 2016 at 1:21 AM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> This gains ~20 clock cycles per vmexit.  On Intel there is no need
> anymore to enable the interrupts in vmx_handle_external_intr, since we
> are using the "acknowledge interrupt on exit" feature.  AMD needs to do
> that temporarily, and must be careful to avoid the interrupt shadow.
>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm.c | 6 ++++++
>  arch/x86/kvm/vmx.c | 4 +---
>  arch/x86/kvm/x86.c | 11 ++---------
>  3 files changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 5ff292778110..5bfdbbf1ce79 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4935,6 +4935,12 @@ out:
>  static void svm_handle_external_intr(struct kvm_vcpu *vcpu)
>  {
>  	local_irq_enable();
> +	/*
> +	 * We must execute an instruction with interrupts enabled, so
> +	 * the "cli" doesn't fall right on the interrupt shadow.
> +	 */
> +	asm("nop");
> +	local_irq_disable();
>  }
>
>  static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 4e9657730bf6..a46bce9e3683 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8544,7 +8544,6 @@ static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
>  			"push %[sp]\n\t"
>  #endif
>  			"pushf\n\t"
> -			"orl $0x200, (%%" _ASM_SP ")\n\t"
>  			__ASM_SIZE(push) " $%c[cs]\n\t"
>  			"call *%[entry]\n\t"
>  			:
> @@ -8557,8 +8556,7 @@ static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
>  			[ss]"i"(__KERNEL_DS),
>  			[cs]"i"(__KERNEL_CS)
>  			);
> -	} else
> -		local_irq_enable();
> +	}

If you make the else case the same as svm_handle_external_intr, can we
avoid requiring ack-intr-on-exit?

>  }
>
>  static bool vmx_has_high_real_mode_segbase(void)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 7e3041ef050f..cc741b68139c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6706,21 +6706,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>
>  	kvm_put_guest_xcr0(vcpu);
>
> -	/* Interrupt is enabled by handle_external_intr() */
>  	kvm_x86_ops->handle_external_intr(vcpu);
>
>  	++vcpu->stat.exits;
>
> -	/*
> -	 * We must have an instruction between local_irq_enable() and
> -	 * kvm_guest_exit(), so the timer interrupt isn't delayed by
> -	 * the interrupt shadow.  The stat.exits increment will do nicely.
> -	 * But we need to prevent reordering, hence this barrier():
> -	 */
> -	barrier();
> -
> -	kvm_guest_exit();
> +	__kvm_guest_exit();
>
> +	local_irq_enable();
>  	preempt_enable();
>
>  	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
> --
> 1.8.3.1
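
To be concrete about the suggestion above: the else branch could just
mirror the SVM hunk in this patch. Untested sketch, nothing assumed
beyond what the patch already does for svm_handle_external_intr():

	} else {
		local_irq_enable();
		/*
		 * Execute one instruction with interrupts enabled so
		 * the "cli" doesn't land in the "sti" interrupt shadow,
		 * same trick as svm_handle_external_intr().
		 */
		asm("nop");
		local_irq_disable();
	}

That way the external interrupt still gets delivered and handled in the
window between the sti and the cli even without "acknowledge interrupt
on exit", so the Intel path wouldn't have to depend on that feature
being available.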