On Thu, 2023-12-21 at 09:02 -0500, Yang Weijiang wrote:
> Save CET SSP to SMRAM on SMI and reload it on RSM. KVM emulates the
> architectural behavior when a guest enters/leaves SMM mode, i.e., it
> saves registers to SMRAM on SMM entry and reloads them on SMM exit.
> Per the SDM, SSP is one such register on 64-bit architectures, so add
> support for saving/restoring SSP. Note, on 32-bit architectures, SSP
> is not defined in SMRAM, so fail the launch of a 32-bit CET guest.
>
> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Suggested-by: Chao Gao <chao.gao@xxxxxxxxx>
> Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
> ---
>  arch/x86/kvm/cpuid.c | 11 +++++++++++
>  arch/x86/kvm/smm.c   |  8 ++++++++
>  arch/x86/kvm/smm.h   |  2 +-
>  3 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 3ab133530573..cfc0ac8ddb4a 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -149,6 +149,17 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu,
>  		if (vaddr_bits != 48 && vaddr_bits != 57 && vaddr_bits != 0)
>  			return -EINVAL;
>  	}
> +	/*
> +	 * Prevent a 32-bit guest from being launched if CET is exposed,
> +	 * as SSP state is not defined for 32-bit SMRAM.
> +	 */
> +	best = cpuid_entry2_find(entries, nent, 0x80000001,
> +				 KVM_CPUID_INDEX_NOT_SIGNIFICANT);
> +	if (best && !(best->edx & F(LM))) {
> +		best = cpuid_entry2_find(entries, nent, 0x7, 0);
> +		if (best && ((best->ecx & F(SHSTK)) || (best->edx & F(IBT))))
> +			return -EINVAL;
> +	}

I honestly prefer a check in enter_smm_save_state_32(), because SMM
might not even be enabled/used for the guest, and for consistency with
the SVM check that I added, but whatever.

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
	Maxim Levitsky

>
>  	/*
>  	 * Exposing dynamic xfeatures to the guest requires additional
> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> index 45c855389ea7..7aac9c54c353 100644
> --- a/arch/x86/kvm/smm.c
> +++ b/arch/x86/kvm/smm.c
> @@ -275,6 +275,10 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
>  	enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
>
>  	smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
> +
> +	if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> +		KVM_BUG_ON(kvm_msr_read(vcpu, MSR_KVM_SSP, &smram->ssp),
> +			   vcpu->kvm);
>  }
>  #endif
>
> @@ -564,6 +568,10 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>  	static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
>  	ctxt->interruptibility = (u8)smstate->int_shadow;
>
> +	if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> +		KVM_BUG_ON(kvm_msr_write(vcpu, MSR_KVM_SSP, smstate->ssp),
> +			   vcpu->kvm);
> +
>  	return X86EMUL_CONTINUE;
>  }
>  #endif
> diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
> index a1cf2ac5bd78..1e2a3e18207f 100644
> --- a/arch/x86/kvm/smm.h
> +++ b/arch/x86/kvm/smm.h
> @@ -116,8 +116,8 @@ struct kvm_smram_state_64 {
>  	u32 smbase;
>  	u32 reserved4[5];
>
> -	/* ssp and svm_* fields below are not implemented by KVM */
>  	u64 ssp;
> +	/* svm_* fields below are not implemented by KVM */
>  	u64 svm_guest_pat;
>  	u64 svm_host_efer;
>  	u64 svm_host_cr4;
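
For readers following the discussion, a minimal sketch of the
alternative Maxim describes: a check at the point where KVM builds the
32-bit SMRAM image, rather than at CPUID validation time. This is
illustrative only, not code from the series; it borrows the
guest_can_use() helper and the KVM_BUG_ON() pattern from the 64-bit
hunks above, and the real enter_smm_save_state_32() in
arch/x86/kvm/smm.c saves far more state than shown here.

	static void enter_smm_save_state_32(struct kvm_vcpu *vcpu,
					    struct kvm_smram_state_32 *smram)
	{
		/*
		 * The 32-bit SMRAM layout has no save slot for SSP, so a
		 * guest that can use shadow stacks must never reach 32-bit
		 * SMM; treat it as a KVM bug if it somehow does.
		 */
		KVM_BUG_ON(guest_can_use(vcpu, X86_FEATURE_SHSTK), vcpu->kvm);

		/* ... existing 32-bit SMRAM save logic ... */
	}

The trade-off is the one Maxim names: the CPUID-time check rejects a
bad configuration even for guests that never enter SMM, whereas a check
in the SMM entry path only fires when the undefined combination is
actually exercised, and mirrors the SVM-side check he mentions.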