On Tue, Mar 26, 2019 at 02:07:46PM +0100, Vitaly Kuznetsov wrote:
> Commit 5bea5123cbf0 ("KVM: VMX: check nested state and CR4.VMXE against
> SMM") introduced a check to vmx_set_cr4() forbidding setting VMXE from SMM.
> The check is correct, however, there is a special case when RSM is called
> to leave SMM: rsm_enter_protected_mode() is called with HF_SMM_MASK still
> set and in case VMXE was set before entering SMM we're failing to return.
>
> Resolve the issue by temporarily dropping HF_SMM_MASK around set_cr4() calls
> when ops->set_cr() is called from RSM.
>
> Reported-by: Jon Doron <arilou@xxxxxxxxx>
> Suggested-by: Liran Alon <liran.alon@xxxxxxxxxx>
> Fixes: 5bea5123cbf0 ("KVM: VMX: check nested state and CR4.VMXE against SMM")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

Plumbing from_rsm all the way to set_cr() is pretty heinous.  What about
going with the idea Jim alluded to, i.e. manually save/restore VMXE during
transitions to/from SMM?  It's a little more tedious than it ought to be
due to the placement and naming of the x86_ops hooks for SMM, but IMO the
end result is cleaner even after adding the necessary callbacks.

The following patches are compile tested only.

Sean Christopherson (2):
  KVM: x86: Rename pre_{enter,leave}_smm() ops to reference SMM state save
  KVM: x86: Add kvm_x86_ops callback to allow VMX to stash away CR4.VMXE

 arch/x86/include/asm/kvm_emulate.h |  3 ++-
 arch/x86/include/asm/kvm_host.h    |  6 ++++--
 arch/x86/kvm/emulate.c             | 10 ++++++----
 arch/x86/kvm/svm.c                 | 20 ++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c             | 30 ++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.h             |  2 ++
 arch/x86/kvm/x86.c                 | 23 ++++++++++++++++-------
 7 files changed, 72 insertions(+), 22 deletions(-)

--
2.21.0
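
[Editor's note: the two patches themselves are not quoted above. For illustration only, here is a minimal, self-contained C sketch of the stash/restore idea outside the kernel tree; the struct, field, and function names are invented for this sketch and are not the ones used in the actual patches. The idea is that a per-vCPU flag remembers whether CR4.VMXE was set when the guest entered SMM, VMXE is cleared for the duration of SMM, and the saved value is restored after RSM, so the SMM-aware set_cr4() check never has to special-case a call from the RSM path.]

/*
 * Standalone sketch of the "stash CR4.VMXE across SMM" idea; all names
 * here are illustrative, not taken from the actual patches.
 */
#include <stdbool.h>
#include <stdio.h>

#define X86_CR4_VMXE	(1UL << 13)

struct toy_vcpu {
	unsigned long cr4;
	bool smm;		/* analogous to HF_SMM_MASK */
	bool smm_vmxe;		/* stashed copy of CR4.VMXE while in SMM */
};

/* Analogous to a pre-SMM-state-save hook: remember and clear VMXE. */
static void toy_pre_smi_save_state(struct toy_vcpu *vcpu)
{
	vcpu->smm_vmxe = vcpu->cr4 & X86_CR4_VMXE;
	vcpu->cr4 &= ~X86_CR4_VMXE;
	vcpu->smm = true;
}

/* Analogous to a post-RSM-load-state hook: restore VMXE if it was set. */
static void toy_post_rsm_load_state(struct toy_vcpu *vcpu)
{
	vcpu->smm = false;
	if (vcpu->smm_vmxe)
		vcpu->cr4 |= X86_CR4_VMXE;
}

/* Mirrors the vmx_set_cr4()-style check that rejects VMXE while in SMM. */
static int toy_set_cr4(struct toy_vcpu *vcpu, unsigned long cr4)
{
	if ((cr4 & X86_CR4_VMXE) && vcpu->smm)
		return 1;
	vcpu->cr4 = cr4;
	return 0;
}

int main(void)
{
	struct toy_vcpu vcpu = { .cr4 = X86_CR4_VMXE };

	toy_pre_smi_save_state(&vcpu);
	/*
	 * While in SMM, RSM emulation reloads CR4 with VMXE already clear,
	 * so the set_cr4() check no longer trips.
	 */
	if (toy_set_cr4(&vcpu, vcpu.cr4))
		printf("set_cr4 failed in SMM\n");
	toy_post_rsm_load_state(&vcpu);

	printf("VMXE after RSM: %lu\n", vcpu.cr4 & X86_CR4_VMXE);
	return 0;
}

[This is only a model of the control flow; in KVM proper the stash would live in vendor-specific vCPU state and the hooks would be kvm_x86_ops callbacks invoked from the SMM enter/leave paths in x86.c, per the patch titles above.]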