On 10/10/2017 14:17, Ladi Prosek wrote:
> +	/* Temporarily set the SMM flag to access the SMM state-save area */
> +	vcpu->arch.hflags |= HF_SMM_MASK;
> +	r = kvm_vcpu_read_guest(vcpu, smbase + 0xfed8, &svm_state_save,
> +				sizeof(svm_state_save));
> +	vcpu->arch.hflags &= ~HF_SMM_MASK;
> +	if (r)

Isn't the flag still set here?  You have:

+	if (ctxt->ops->post_leave_smm(ctxt, smbase, &reload_state))
+		return X86EMUL_UNHANDLEABLE;
+
+	if (reload_state) {
+		/*
+		 * post_leave_smm() made changes to the vCPU (e.g. entered
+		 * guest mode) and is asking us to load the SMM state-save
+		 * area again.
+		 */
+		ret = rsm_load_state(ctxt, smbase);
+		if (ret != X86EMUL_CONTINUE) {
+			/* FIXME: should triple fault */
+			return X86EMUL_UNHANDLEABLE;
+		}
+	}
+
	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
		ctxt->ops->set_nmi_mask(ctxt, false);

	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
		~(X86EMUL_SMM_INSIDE_NMI_MASK | X86EMUL_SMM_MASK));

so post_leave_smm() is invoked while HF_SMM_MASK is still set in
hflags; the SMM bits are only cleared afterwards.

On the other hand, you need to turn HF_SMM_MASK off and back on around
nested_svm_map/enter_svm_guest_mode, so that the VMCB is not read from
SMRAM.

And the mandatory stupid question: why is the first state load needed
at all?  Could VMX's post_leave_smm callback (which would really
become a pre_leave_smm callback) use load_vmcs12_host_state instead?
SVM likewise could extract load_from_hsave_area from nested_svm_vmexit
and pre-load the SMM state-save area data in pre_leave_smm.  (Yes,
load_vmcs12_host_state is overkill, but it's already there and it
already knows how to do things like entering protected mode, etc.)

Instead, post_leave_smm would run with the SMM flag clear, so
hopefully you wouldn't need games with HF_SMM_MASK at all (except for
the reserved VMXE flag---let's leave that out for now and concentrate
on the right way to do L2 state save/restore...).

Paolo
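
P.S. Concretely, the reordering I have in mind for the tail of em_rsm()
is something like the sketch below.  Untested and not even compiled:
pre_leave_smm is just my proposed name for the renamed callback, and I
am keeping your post_leave_smm for the guest mode entry, minus the
reload_state argument, which is no longer needed:

	/*
	 * Let the vendor callback restore L1 state (load_vmcs12_host_state
	 * on VMX, the hsave area on SVM) while we are still in SMM, so
	 * that a single load from the SMM state-save area suffices.
	 */
	if (ctxt->ops->pre_leave_smm(ctxt, smbase))
		return X86EMUL_UNHANDLEABLE;

	ret = rsm_load_state(ctxt, smbase);
	if (ret != X86EMUL_CONTINUE) {
		/* FIXME: should triple fault */
		return X86EMUL_UNHANDLEABLE;
	}

	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
		ctxt->ops->set_nmi_mask(ctxt, false);

	ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) &
		~(X86EMUL_SMM_INSIDE_NMI_MASK | X86EMUL_SMM_MASK));

	/* HF_SMM_MASK is clear now, so entering guest mode needs no games. */
	if (ctxt->ops->post_leave_smm(ctxt, smbase))
		return X86EMUL_UNHANDLEABLE;

	return X86EMUL_CONTINUE;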
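
And for the HF_SMM_MASK off/on dance around the VMCB map that I
mentioned above, I mean roughly the following (again an untested
sketch; the function name is made up, and I'm assuming the
nested_svm_map/enter_svm_guest_mode signatures from your series):

static int svm_rsm_enter_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa)
{
	struct vcpu_svm *svm = to_svm(vcpu);
	struct vmcb *nested_vmcb;
	struct page *page;

	/*
	 * Drop HF_SMM_MASK so that nested_svm_map() translates the VMCB
	 * address through the normal guest address space rather than
	 * SMRAM, then set it again so the rest of the RSM emulation
	 * still sees the vCPU in SMM.
	 */
	vcpu->arch.hflags &= ~HF_SMM_MASK;
	nested_vmcb = nested_svm_map(svm, vmcb_gpa, &page);
	if (nested_vmcb)
		enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, page);
	vcpu->arch.hflags |= HF_SMM_MASK;

	return nested_vmcb ? 0 : 1;
}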