Jan Kiszka wrote:
> Dmitry Eremin-Solenikov wrote:
>> On Mon, Apr 13, 2009 at 12:55:43PM +0300, kvm-owner@xxxxxxxxxxxxxxx wrote:
>>> Signed-off-by: Gleb Natapov <gleb@xxxxxxxxxx>
>>
>> The attached patch, if applied on top of the series, fixes the NMI issue
>> on SVM. I did not refactor it on top of this patch though, sorry.
>>
>>
>> From 26d7e88c84089abbe871286d54e77ff2922dc33d Mon Sep 17 00:00:00 2001
>> From: Dmitry Eremin-Solenikov <dbaryshkov@xxxxxxxxx>
>> Date: Fri, 17 Apr 2009 22:53:50 +0400
>> Subject: [PATCH] KVM: correct NMI injection logic wrt NMI window tracking
>>
>> inject_pending_irq() calls inject_irq(), which clears the nmi_pending flag
>> if the NMI was injected. Thus for tracking we should use the nmi_injected
>> flag. This finally fixes NMI injection on SVM.
>>
>> Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@xxxxxxxxx>
>> ---
>>  arch/x86/kvm/x86.c |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index e4cc717..eeed350 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -3160,7 +3160,7 @@ static void inject_pending_irq(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>>  		inject_irq(vcpu);
>>
>>  	/* enable NMI/IRQ window open exits if needed */
>> -	if (vcpu->arch.nmi_pending)
>> +	if (vcpu->arch.nmi_injected)
>>  		kvm_x86_ops->enable_nmi_window(vcpu);
>>  	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
>>  		kvm_x86_ops->enable_irq_window(vcpu);
>
> Hmm, good to know that it works better now, but I'm afraid this papers
> over an issue in svm (and will break other cases). The logic here is: we
> injected something (IRQ or NMI), and if there is more pending, _then_
> enable the corresponding window. The check you changed should actually
> only fire if we (re-)injected an IRQ for this round and there is now
> also an NMI pending.
>
> My feeling is that the real issue is in svm, which probably fails to open
> the NMI window on NMI injection. In contrast to the latest Intel CPUs, we
> have to do this unconditionally on AMD (no virtual NMI mask). And as
> this is so, svm has to take care that this is done on injection, not
> here via the generic code. What about setting INTERCEPT_IRET
> additionally in svm_inject_nmi?
>

Yep, this also allows injecting more than one NMI here:

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index af61744..79b9d8b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1831,7 +1831,7 @@ static int iret_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 {
 	svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET);
 	svm->vcpu.arch.hflags &= ~HF_NMI_MASK;
-	return 0;
+	return 1;
 }

 static int invlpg_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
@@ -2232,6 +2232,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);

 	svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI;
+	svm->vmcb->control.intercept |= (1UL << INTERCEPT_IRET);
 	svm->vcpu.arch.hflags |= HF_NMI_MASK;
 }

Jan
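
For readers following the thread: below is a minimal user-space sketch of the
NMI-masking state machine the two hunks above implement on AMD, where there is
no virtual NMI mask, so an NMI must stay masked from injection until the guest's
IRET, with the IRET intercept armed only while an NMI is in service. All names
in the sketch (vcpu_model and its fields) are hypothetical illustration, not the
KVM code itself; the comments point at the corresponding kernel pieces.

/* Stand-alone model of the SVM NMI-masking flow discussed above. */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool nmi_masked;	/* models HF_NMI_MASK */
	bool intercept_iret;	/* models the INTERCEPT_IRET bit */
	int  nmi_pending;	/* NMIs queued for injection */
};

/* Injection: deliver one NMI, mask further NMIs, arm the IRET intercept. */
static void inject_nmi(struct vcpu_model *v)
{
	if (!v->nmi_pending || v->nmi_masked)
		return;
	v->nmi_pending--;
	v->nmi_masked = true;		/* svm_inject_nmi(): hflags |= HF_NMI_MASK */
	v->intercept_iret = true;	/* svm_inject_nmi(): intercept |= INTERCEPT_IRET */
	printf("NMI injected, %d still pending\n", v->nmi_pending);
}

/* IRET intercept: the guest finished its NMI handler, so unmask again. */
static void iret_intercepted(struct vcpu_model *v)
{
	v->intercept_iret = false;	/* iret_interception(): drop INTERCEPT_IRET */
	v->nmi_masked = false;		/*                      clear HF_NMI_MASK  */
	printf("IRET seen, NMIs unmasked\n");
}

int main(void)
{
	struct vcpu_model v = { .nmi_pending = 2 };

	inject_nmi(&v);		/* first NMI goes in                      */
	inject_nmi(&v);		/* blocked: still masked until guest IRET */
	iret_intercepted(&v);	/* guest returns from the NMI handler     */
	inject_nmi(&v);		/* now the second NMI can be injected     */
	return 0;
}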