This is a note to let you know that I've just added the patch titled

    KVM: x86: Emulate triple fault shutdown if RSM emulation fails

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-x86-emulate-triple-fault-shutdown-if-rsm-emulati.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 9d1b2beac4b61cf1f30aa0dddc85547ed61eb2cb
Author: Sean Christopherson <seanjc@xxxxxxxxxx>
Date:   Wed Jun 9 11:56:12 2021 -0700

    KVM: x86: Emulate triple fault shutdown if RSM emulation fails

    [ Upstream commit 25b17226cd9a77982fc8c915d4118d7238a0f079 ]

    Use the recently introduced KVM_REQ_TRIPLE_FAULT to properly emulate
    shutdown if RSM from SMM fails.

    Note, entering shutdown after clearing the SMM flag and restoring NMI
    blocking is architecturally correct with respect to AMD's APM, which KVM
    also uses for SMRAM layout and RSM NMI blocking behavior.  The APM says:

      An RSM causes a processor shutdown if an invalid-state condition is
      found in the SMRAM state-save area.  Only an external reset, external
      processor-initialization, or non-maskable external interrupt (NMI) can
      cause the processor to leave the shutdown state.

    Of note is processor-initialization (INIT) as a valid shutdown wake
    event, as INIT is blocked by SMM, implying that entering shutdown also
    forces the CPU out of SMM.

    For recent Intel CPUs, restoring NMI blocking is technically wrong, but
    so is restoring NMI blocking in the first place, and Intel's RSM
    "architecture" is such a mess that just about anything is allowed and
    can be justified as micro-architectural behavior.

    Per the SDM:

      On Pentium 4 and later processors, shutdown will inhibit INTR and A20M
      but will not change any of the other inhibits.  On these processors,
      NMIs will be inhibited if no action is taken in the SMI handler to
      uninhibit them (see Section 34.8).

    where Section 34.8 says:

      When the processor enters SMM while executing an NMI handler, the
      processor saves the SMRAM state save map but does not save the
      attribute to keep NMI interrupts disabled.  Potentially, an NMI could
      be latched (while in SMM or upon exit) and serviced upon exit of SMM
      even though the previous NMI handler has still not completed.

    I.e. RSM unconditionally unblocks NMI, but shutdown on RSM does not,
    which is in direct contradiction of KVM's behavior.  But, as mentioned
    above, KVM follows AMD architecture and restores NMI blocking on RSM,
    so that micro-architectural detail is already lost.

    And for Pentium era CPUs, SMI# can break shutdown, meaning that at least
    some Intel CPUs fully leave SMM when entering shutdown:

      In the shutdown state, Intel processors stop executing instructions
      until a RESET#, INIT# or NMI# is asserted.  While Pentium family
      processors recognize the SMI# signal in shutdown state, P6 family and
      Intel486 processors do not.

    In other words, the fact that Intel CPUs have implemented the two
    extremes gives KVM carte blanche when it comes to honoring Intel's
    architecture for handling shutdown during RSM.

    Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
    Message-Id: <20210609185619.992058-3-seanjc@xxxxxxxxxx>
    [Return X86EMUL_CONTINUE after triple fault. - Paolo]
    Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
    Stable-dep-of: 055f37f84e30 ("KVM: x86: emulator: update the emulation mode after rsm")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 63efccc8f429..89ad10261d90 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2730,7 +2730,7 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
	 * state-save area.
	 */
	if (ctxt->ops->pre_leave_smm(ctxt, buf))
-		return X86EMUL_UNHANDLEABLE;
+		goto emulate_shutdown;

 #ifdef CONFIG_X86_64
	if (emulator_has_longmode(ctxt))
@@ -2739,14 +2739,16 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 #endif
		ret = rsm_load_state_32(ctxt, buf);

-	if (ret != X86EMUL_CONTINUE) {
-		/* FIXME: should triple fault */
-		return X86EMUL_UNHANDLEABLE;
-	}
+	if (ret != X86EMUL_CONTINUE)
+		goto emulate_shutdown;

	ctxt->ops->post_leave_smm(ctxt);

	return X86EMUL_CONTINUE;
+
+emulate_shutdown:
+	ctxt->ops->triple_fault(ctxt);
+	return X86EMUL_CONTINUE;
 }

 static void
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index aeed6da60e0c..1da3f77a8728 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -234,6 +234,7 @@ struct x86_emulate_ops {
	int (*pre_leave_smm)(struct x86_emulate_ctxt *ctxt,
			     const char *smstate);
	void (*post_leave_smm)(struct x86_emulate_ctxt *ctxt);
+	void (*triple_fault)(struct x86_emulate_ctxt *ctxt);
	int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr);
 };

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 23d7c563e012..20dc108f2c4c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7018,6 +7018,11 @@ static void emulator_post_leave_smm(struct x86_emulate_ctxt *ctxt)
	kvm_smm_changed(emul_to_vcpu(ctxt));
 }

+static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt)
+{
+	kvm_make_request(KVM_REQ_TRIPLE_FAULT, emul_to_vcpu(ctxt));
+}
+
 static int emulator_set_xcr(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr)
 {
	return __kvm_set_xcr(emul_to_vcpu(ctxt), index, xcr);
@@ -7068,6 +7073,7 @@ static const struct x86_emulate_ops emulate_ops = {
	.set_hflags = emulator_set_hflags,
	.pre_leave_smm = emulator_pre_leave_smm,
	.post_leave_smm = emulator_post_leave_smm,
+	.triple_fault = emulator_triple_fault,
	.set_xcr = emulator_set_xcr,
 };
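
For quick reference, with the two emulate.c hunks applied, the tail of
em_rsm() in the 5.10 tree should read roughly like the sketch below.  The
lines not visible in the diff context (the rsm_load_state_64() call, the
trailing "else", and the comments) are filled in here as an illustration
and may not match the file verbatim:

static int em_rsm(struct x86_emulate_ctxt *ctxt)
{
	...
	if (ctxt->ops->pre_leave_smm(ctxt, buf))
		goto emulate_shutdown;

#ifdef CONFIG_X86_64
	if (emulator_has_longmode(ctxt))
		ret = rsm_load_state_64(ctxt, buf);
	else
#endif
		ret = rsm_load_state_32(ctxt, buf);

	if (ret != X86EMUL_CONTINUE)
		goto emulate_shutdown;

	ctxt->ops->post_leave_smm(ctxt);

	return X86EMUL_CONTINUE;

emulate_shutdown:
	/*
	 * A failed RSM now requests a triple fault via the new
	 * ->triple_fault() hook instead of returning X86EMUL_UNHANDLEABLE,
	 * so the guest observes an architectural shutdown rather than an
	 * emulation failure.
	 */
	ctxt->ops->triple_fault(ctxt);
	return X86EMUL_CONTINUE;
}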
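
Note that nothing in this patch consumes the request itself;
KVM_REQ_TRIPLE_FAULT is picked up on the next vCPU entry by
vcpu_enter_guest() in arch/x86/kvm/x86.c, whose pre-existing handling
(not part of this diff, reproduced here only approximately from the 5.10
sources) turns it into a shutdown exit to userspace:

	if (kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)) {
		/* Report an architectural shutdown to the VMM. */
		vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
		r = 0;
		goto out;
	}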