On Tue, Apr 28, 2020 at 03:12:51PM -0700, Jim Mattson wrote:
> On Wed, Apr 22, 2020 at 7:26 PM Sean Christopherson
> <sean.j.christopherson@xxxxxxxxx> wrote:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 7c49a7dc601f..d9d6028a77e0 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -7755,24 +7755,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
> >                         --vcpu->arch.nmi_pending;
> >                         vcpu->arch.nmi_injected = true;
> >                         kvm_x86_ops.set_nmi(vcpu);
> > -       } else if (kvm_cpu_has_injectable_intr(vcpu)) {
> > -               /*
> > -                * Because interrupts can be injected asynchronously, we are
> > -                * calling check_nested_events again here to avoid a race condition.
> > -                * See https://lkml.org/lkml/2014/7/2/60 for discussion about this
> > -                * proposal and current concerns. Perhaps we should be setting
> > -                * KVM_REQ_EVENT only on certain events and not unconditionally?
> > -                */
> > -               if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
> > -                       r = kvm_x86_ops.check_nested_events(vcpu);
> > -                       if (r != 0)
> > -                               return r;
> > -               }
> > -               if (kvm_x86_ops.interrupt_allowed(vcpu)) {
> > -                       kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
> > -                                           false);
> > -                       kvm_x86_ops.set_irq(vcpu);
> > -               }
> > +       } else if (kvm_cpu_has_injectable_intr(vcpu) &&
> > +                  kvm_x86_ops.interrupt_injection_allowed(vcpu)) {
> > +               kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
> > +               kvm_x86_ops.set_irq(vcpu);
> >         }
>
> So, that's what this mess was all about!  Well, this certainly looks
> better.

Right?  I can't count the number of times I've looked at this code and
wondered what the hell it was doing.

Side topic, I just realized you're reviewing my original series.  Paolo
commandeered it to extend it to SVM.

https://patchwork.kernel.org/cover/11508679/
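
For anyone reading this without the rest of the series: below is a rough
sketch of the shape the consolidated vendor callback could take on the VMX
side, built from the checks the existing vmx_interrupt_allowed() already
performs.  The function name just mirrors the hook in the diff above, and the
nested-virtualization handling shown here is illustrative, not lifted from
the series itself.

/*
 * Illustrative sketch, not the patch itself: the new hook lets the vendor
 * code answer "can an IRQ be injected right now?" in one place, so the
 * common inject_pending_event() no longer re-runs check_nested_events()
 * before checking whether injection is allowed.
 */
static bool vmx_interrupt_injection_allowed(struct kvm_vcpu *vcpu)
{
	/*
	 * Injection is blocked while a nested VM-Enter is pending; any
	 * other nested-specific rules would also live here instead of in
	 * common x86 code.
	 */
	if (to_vmx(vcpu)->nested.nested_run_pending)
		return false;

	/* Architectural check: RFLAGS.IF set and no STI/MOV-SS shadow. */
	return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
	       !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
		 (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
}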