[PATCH v2 6/6] KVM: nSVM: fix SMI injection in guest mode


Entering SMM while running in guest mode did not work correctly because several
pieces of the vcpu state were left set up for nested operation.

Some of the issues observed:

* L1 was getting unexpected VM exits (using L1 interception controls but running
  in SMM execution environment)
* MMU was confused (walk_mmu was still set to nested_mmu)

The Intel SDM prescribes that the logical processor "leave VMX operation" upon
entering SMM in 34.14.1 Default Treatment of SMI Delivery. AMD doesn't seem to
document this but handling SMM in the same fashion is a safe bet. What we need
to do is basically get out of guest mode for the duration of SMM. All this is
completely transparent to L1, i.e. L1 is not given control and no L1-observable
state changes.

To avoid code duplication this commit takes advantage of the existing nested
vmexit and run functionality, perhaps at the cost of efficiency. To get out of
guest mode, nested_svm_vmexit is called, unchanged. Re-entering is performed using
enter_svm_guest_mode.

This commit fixes running Windows Server 2016 with Hyper-V enabled in a VM with
OVMF firmware (OVMF_CODE-need-smm.fd).

Signed-off-by: Ladi Prosek <lprosek@xxxxxxxxxx>
---
 arch/x86/kvm/svm.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 45dcd29ff744..f592f1c271fb 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -152,6 +152,14 @@ struct nested_state {
 
 	/* Nested Paging related state */
 	u64 nested_cr3;
+
+	/* SMM related state */
+	struct {
+		/* in guest mode on SMM entry? */
+		bool guest_mode;
+		/* current nested VMCB on SMM entry */
+		u64 vmcb;
+	} smm;
 };
 
 #define MSRPM_OFFSETS	16
@@ -5370,13 +5378,38 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu)
 
 static int svm_prep_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	svm->nested.smm.guest_mode = is_guest_mode(vcpu);
+	if (svm->nested.smm.guest_mode) {
+		svm->nested.smm.vmcb = svm->nested.vmcb;
+
+		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+		svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+
+		ret = nested_svm_vmexit(svm);
+		if (ret)
+			return ret;
+	}
 	return 0;
 }
 
 static int svm_post_leave_smm(struct kvm_vcpu *vcpu)
 {
-	/* TODO: Implement */
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct vmcb *nested_vmcb;
+	struct page *page;
+
+	if (svm->nested.smm.guest_mode) {
+		nested_vmcb = nested_svm_map(svm, svm->nested.smm.vmcb, &page);
+		if (!nested_vmcb)
+			return 1;
+		enter_svm_guest_mode(svm, svm->nested.smm.vmcb, nested_vmcb, page);
+
+		svm->nested.smm.guest_mode = false;
+	}
 	return 0;
 }
 
-- 
2.13.5



