Windows Server 2016 with Hyper-V enabled fails to boot on OVMF with SMM
(OVMF_CODE-need-smm.fd). It turns out that the SMM emulation code in KVM
does not handle nested virtualization very well, leading to a number of
issues. For example, Hyper-V uses descriptor-table exiting
(SECONDARY_EXEC_DESC), so when the SMM handler tries to switch out of real
mode, a VM exit occurs and is forwarded to a clueless L1.

This series fixes it by switching the vcpu to !guest_mode, i.e. to the L1
state, before entering SMM, and then switching back to L2 after the RSM
instruction is emulated. (A simplified sketch of this flow is appended
after the diffstat.)

Patches 1 and 2 are common to both Intel and AMD, patch 3 fixes Intel, and
patches 5 and 6 fix AMD. Patch 4 adds more state to the SMRAM state save
area as prescribed by the Intel SDM; it is, however, not required to make
Windows work.

v1->v2:
* Moved left_smm detection to emulator_set_hflags (couldn't quite get rid
  of the field despite my original claim) (Paolo)
* Moved the kvm_x86_ops->post_leave_smm() call a few statements down so it
  really runs after all state has been synced.
* Added the smi_allowed callback (new patch 2) to avoid running into
  WARN_ON_ONCE(vmx->nested.nested_run_pending) on Intel.

Ladi Prosek (6):
  KVM: x86: introduce ISA specific SMM entry/exit callbacks
  KVM: x86: introduce ISA specific smi_allowed callback
  KVM: nVMX: fix SMI injection in guest mode
  KVM: nVMX: save nested EPT information in SMRAM state save map
  KVM: nSVM: refactor nested_svm_vmrun
  KVM: nSVM: fix SMI injection in guest mode

 arch/x86/include/asm/kvm_emulate.h |   1 +
 arch/x86/include/asm/kvm_host.h    |   9 ++
 arch/x86/kvm/svm.c                 | 186 ++++++++++++++++++++++++-----------
 arch/x86/kvm/vmx.c                 |  91 ++++++++++++++++--
 arch/x86/kvm/x86.c                 |  17 +++-
 5 files changed, 227 insertions(+), 77 deletions(-)
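
For readers not following the patches themselves, here is a standalone
sketch of the idea: an ISA-specific pre-SMM hook forces the vcpu out of
guest mode so the SMM handler sees L1 state, and a post-RSM hook switches
back to L2 once all state has been synced; smi_allowed gates injection
while a nested run is pending. The struct layout and the pre_enter_smm
name are simplified stand-ins for illustration, not the actual kernel
interfaces; only post_leave_smm and smi_allowed are named in the series.

/* Not kernel code: a minimal model of the SMM entry/exit flow. */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool guest_mode;          /* true while running L2 */
	bool nested_run_pending;  /* vmlaunch/vmresume not yet completed */
	bool left_smm_in_l2;      /* remember that the SMI interrupted L2 */
};

/* ISA-specific hook: is it safe to inject the SMI right now? */
static bool smi_allowed(struct vcpu *v)
{
	/* Mirrors the WARN_ON_ONCE(nested_run_pending) concern on Intel. */
	return !v->nested_run_pending;
}

/* Hook run before SMM state is saved and SMM is entered. */
static void pre_enter_smm(struct vcpu *v)
{
	if (v->guest_mode) {
		v->left_smm_in_l2 = true;
		v->guest_mode = false;   /* switch to L1 (!guest_mode) state */
	}
}

/* Hook run after RSM has been emulated and state synced back. */
static void post_leave_smm(struct vcpu *v)
{
	if (v->left_smm_in_l2) {
		v->left_smm_in_l2 = false;
		v->guest_mode = true;    /* resume L2 */
	}
}

int main(void)
{
	struct vcpu v = { .guest_mode = true };

	if (smi_allowed(&v)) {
		pre_enter_smm(&v);
		printf("in SMM, guest_mode=%d\n", v.guest_mode);   /* 0: L1 state */
		/* ... SMM handler runs, eventually executes RSM ... */
		post_leave_smm(&v);
		printf("after RSM, guest_mode=%d\n", v.guest_mode); /* 1: back in L2 */
	}
	return 0;
}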