These are a few SMM fixes I was working on last week.

* The first patch fixes a minor issue that remained after commit
  37be407b2ce8 ("KVM: nSVM: Fix L1 state corruption upon return from
  SMM").

  While returns to guest mode from SMM now work thanks to the state
  restored from the HSAVE area, the guest entry itself still sees
  incorrect HSAVE state. This breaks, for example, return from SMM when
  the guest is 32 bit, because the PDPTR loads are done using MMU state
  that was set up from the incorrect L1 HSAVE state. (A sketch of the
  intended ordering follows at the end of this mail.)

* The second patch fixes a theoretical issue that I introduced with my
  SREGS2 patchset, which Sean Christopherson pointed out.

  The issue is that the KVM_REQ_GET_NESTED_STATE_PAGES request is not
  only used for completing the load of nested state, but also for
  completing an exit from SMM to guest mode, and my compatibility hack
  around 'pdptrs_from_userspace' was written assuming the latter never
  happens.

  While it would be safe to just reset 'pdptrs_from_userspace' on each
  VM entry, I don't want to slow down the common code path for this
  very rare hack. Instead, I explicitly zero this variable when an SMM
  exit to guest mode is done, because in this case the PDPTRs always
  need to be reloaded from memory. (Also sketched at the end of this
  mail.)

  Note that this is a theoretical issue only, because after the vendor
  return-from-SMM code (aka .leave_smm) is done, even when it has
  returned to guest mode and loaded some of the L2 CPU state, we still
  load all of the L2 CPU state captured in SMRAM, which includes CR3,
  at which point the guest PDPTRs are reloaded anyway.

  Also note that CR3 does not seem to be updated across SMI entries,
  and Intel's SDM notes that its saved value in SMRAM isn't writable,
  so it is possible that if the SMM handler doesn't change CR3, the
  PDPTRs are not touched. That means an SMI handler could in theory
  preserve the PDPTRs by never touching CR3, but since we recently
  removed the code that skipped the PDPTR update when CR3 didn't
  change, I guess that no longer works. Anyway, I don't think any OS
  bothers to keep PDPTRs out of sync with whatever page CR3 points at,
  so I didn't bother to test what real hardware does in this case.

* The third patch makes the SVM SMM exit a bit more similar to how VMX
  does it, by also raising a KVM_REQ_GET_NESTED_STATE_PAGES request.
  (Also sketched at the end of this mail.)

  I do have doubts about why we need to do this on VMX though. The
  initial justification comes from 7f7f1ba33cf2 ("KVM: x86: do not load
  vmcs12 pages while still in SMM"). With all the MMU changes, I am not
  sure we can still hit a case of an out-of-date MMU when we enter the
  nested guest from SMM. On SVM it does seem to work anyway without
  this.

I am still tracking another SMM issue, which I debugged a bit today but
still have no lead on what is going on: when a Hyper-V guest runs
nested and uses SMM-enabled OVMF, it crashes and reboots during the
boot process.

Best regards,
	Maxim Levitsky

Maxim Levitsky (3):
  KVM: nSVM: restore the L1 host state prior to resuming a nested guest
    on SMM exit
  KVM: x86: force PDPTRs reload on SMM exit
  KVM: nSVM: call KVM_REQ_GET_NESTED_STATE_PAGES on exit from SMM mode

 arch/x86/kvm/svm/nested.c |  9 ++++++---
 arch/x86/kvm/svm/svm.c    | 27 ++++++++++++++++++---------
 arch/x86/kvm/svm/svm.h    |  3 ++-
 arch/x86/kvm/vmx/vmx.c    |  7 +++++++
 4 files changed, 33 insertions(+), 13 deletions(-)

-- 
2.26.3
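
To make the first patch's idea concrete, here is a minimal sketch of
the intended svm_leave_smm() ordering. This is not the actual patch:
the SMRAM parsing and vmcb12 mapping are elided, and
restore_l1_hsave_state() is a made-up placeholder name for whatever
copies the L1 host state from the HSAVE area back into vmcb01.

	/*
	 * The L1 host state saved in the HSAVE area must land in vmcb01
	 * *before* the nested entry below, because enter_svm_guest_mode()
	 * consults MMU state derived from vmcb01, e.g. when loading L2's
	 * PDPTRs for a 32 bit guest.
	 */
	restore_l1_hsave_state(svm);		/* placeholder helper */

	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);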
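
The second patch's fix boils down to a single assignment; a sketch,
assuming it is placed in whatever common x86 path handles the SMM exit
(the exact hook is a detail of the patch):

	/*
	 * Exiting SMM always reloads CR3 (and thus the PDPTRs) from
	 * SMRAM, so PDPTRs that userspace provided via KVM_SET_SREGS2
	 * are stale by now and must not suppress the PDPTR reload done
	 * when KVM_REQ_GET_NESTED_STATE_PAGES is processed.
	 */
	vcpu->arch.pdptrs_from_userspace = false;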
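
Finally, a sketch of the third patch's idea, mirroring what the VMX
.leave_smm path already does; again only the relevant tail of
svm_leave_smm() is shown, not the actual patch:

	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
	if (ret)
		return ret;

	/*
	 * As on VMX, let the next vcpu_enter_guest() finish loading the
	 * nested state pages, where it is safe to sleep and to access
	 * guest memory.
	 */
	kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

	return 0;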