On 2016-03-22 06:28, Yuki Shibuya wrote:
> Non-maskable interrupts (NMIs) are preferred to interrupts in the
> current implementation. If an NMI is pending and NMIs are blocked by
> the result of nmi_allowed(), a pending interrupt is not injected and
> enable_irq_window() is not executed, even if interrupt injection is
> allowed.
>
> In old kernels (e.g. 2.6.32), schedule() is often called in NMI
> context. In this case, interrupts are needed to execute the iret that
> marks the end of the NMI. The flag blocking new NMIs is not cleared
> until the guest executes the iret, and interrupts are blocked by the
> pending NMI. Due to this, the iret can't be invoked in the guest, and
> the guest is starved until the block is cleared by some event
> (e.g. cancelling injection).
>
> This patch injects pending interrupts, when allowed, even if an NMI
> is blocked. And, if the NMI pending count == 2, NMIs are not blocked
> and an interrupt is pending, the NMI pending count is decremented so
> that enable_irq_window() is executed.

The first part I understand and agree with. But the part after "And"
worries me still: if we can simply decrement that pending counter once
again - why can't we just clear it to 0 in the first place?

BTW, some inline documentation of this tricky logic would probably be
good.

Jan

>
> Signed-off-by: Yuki Shibuya <shibuya.yk@xxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/x86.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 7236bd3..1373627 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6087,12 +6087,14 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>  	}
>
>  	/* try to inject new event if pending */
> -	if (vcpu->arch.nmi_pending) {
> -		if (kvm_x86_ops->nmi_allowed(vcpu)) {
> +	if (vcpu->arch.nmi_pending && kvm_x86_ops->nmi_allowed(vcpu)) {
> +		--vcpu->arch.nmi_pending;
> +		vcpu->arch.nmi_injected = true;
> +		kvm_x86_ops->set_nmi(vcpu);
> +
> +		if (vcpu->arch.nmi_pending && kvm_cpu_has_injectable_intr(vcpu)
> +		    && kvm_x86_ops->interrupt_allowed(vcpu))
>  			--vcpu->arch.nmi_pending;
> -			vcpu->arch.nmi_injected = true;
> -			kvm_x86_ops->set_nmi(vcpu);
> -		}
>  	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
>  		/*
>  		 * Because interrupts can be injected asynchronously, we are
>
-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux
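
To make the behaviour under discussion easier to follow, here is a
minimal stand-alone model of the decision the patched hunk makes. This
is not KVM code: the struct fields and helpers (nmi_allowed,
intr_allowed, has_injectable_intr, inject_pending_event_model) are
invented stand-ins for vcpu->arch.nmi_pending, kvm_x86_ops->nmi_allowed()
and friends, and the comments only restate the intent given in the
commit message above.

/*
 * Stand-alone model of the injection decision in the patch above.
 * Not KVM code; names only mimic the real ones. It illustrates the
 * case Jan asks about: nmi_pending == 2, NMIs unblocked, and an
 * interrupt deliverable, where the patch drops one queued NMI so
 * that, per the commit message, enable_irq_window() can run instead
 * of the interrupt staying starved behind the second NMI.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	unsigned int nmi_pending;   /* queued NMIs, like vcpu->arch.nmi_pending */
	bool nmi_injected;          /* an NMI was injected this round */
	bool nmi_allowed;           /* result of ->nmi_allowed() */
	bool intr_allowed;          /* result of ->interrupt_allowed() */
	bool has_injectable_intr;   /* kvm_cpu_has_injectable_intr() */
};

/* Mirrors the patched hunk of inject_pending_event(). */
static void inject_pending_event_model(struct vcpu_model *v)
{
	if (v->nmi_pending && v->nmi_allowed) {
		--v->nmi_pending;
		v->nmi_injected = true;         /* set_nmi() in the real code */

		/*
		 * The tricky part: another NMI is still queued, but an
		 * external interrupt is also deliverable. Dropping the
		 * queued NMI lets the interrupt window be opened later,
		 * so the guest can reach its iret instead of starving.
		 */
		if (v->nmi_pending && v->has_injectable_intr && v->intr_allowed)
			--v->nmi_pending;
	} else if (v->has_injectable_intr) {
		/* interrupt injection path, unchanged by the patch */
	}
}

int main(void)
{
	struct vcpu_model v = {
		.nmi_pending = 2, .nmi_allowed = true,
		.intr_allowed = true, .has_injectable_intr = true,
	};

	inject_pending_event_model(&v);
	printf("nmi_injected=%d nmi_pending=%u\n", v.nmi_injected, v.nmi_pending);
	return 0;
}

Running the model with the starting state above ends with nmi_injected=1
and nmi_pending=0, which is exactly the situation Jan's question targets:
the second queued NMI is silently discarded rather than the counter being
cleared explicitly.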