Gleb Natapov wrote:
> Signed-off-by: Gleb Natapov <gleb@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h |    1 +
>  arch/x86/kvm/svm.c              |   49 +++++++++++++++++++++++++++++++++++++-
>  2 files changed, 48 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 8b6f6e9..057a612 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -766,6 +766,7 @@ enum {
>  #define HF_GIF_MASK		(1 << 0)
>  #define HF_HIF_MASK		(1 << 1)
>  #define HF_VINTR_MASK		(1 << 2)
> +#define HF_NMI_MASK		(1 << 3)
>
>  /*
>   * Hardware virtualization extension instructions may fault if a
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index c605477..cd60fd7 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1834,6 +1834,13 @@ static int cpuid_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
>  	return 1;
>  }
>
> +static int iret_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
> +{
> +	svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET);
> +	svm->vcpu.arch.hflags &= ~HF_NMI_MASK;
> +	return 0;
> +}

First, this must return 1 (or set an exit reason, but there is no reason to
escape to user space here).

And second, I think a corner case is not handled the same way as on real
iron: if the next NMI is already waiting, we will inject it before the iret,
not after its execution as it should be. No easy solution for this yet.
Maybe emulating iret, but there is no implementation for that, specifically
not for protected mode. Maybe setting a breakpoint. Or maybe enforcing a
single-step exception. Nothing trivial in this list. On the other hand,
this may only be a slight imprecision of the virtualization. Need to think
about it.

Jan
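A minimal sketch of how the handler might look with the first point applied,
i.e. returning 1 so the exit is handled entirely in the kernel instead of
escaping to user space. The signature and fields are those of the quoted
patch; only the return value and the comments differ, and the second point
(the already-pending NMI being injected before the guest's iret actually
completes) is still left open:

static int iret_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
	/*
	 * The guest is about to execute iret, i.e. it is leaving its NMI
	 * handler: stop intercepting iret and clear the software
	 * NMI-blocked flag so a pending NMI can be injected again.
	 */
	svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET);
	svm->vcpu.arch.hflags &= ~HF_NMI_MASK;

	/* Handled completely in the kernel; no need to exit to user space. */
	return 1;
}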