On Thu, Jul 20, 2023 at 11:03:43PM -0400, Yang Weijiang wrote:
>Save GUEST_SSP to SMRAM on SMI and reload it on RSM.
>KVM emulates architectural behavior when guest enters/leaves SMM
>mode, i.e., save registers to SMRAM at the entry of SMM and reload
>them at the exit of SMM. Per SDM, GUEST_SSP is defined as one of

To me, GUEST_SSP is confusing here. From QEMU's perspective, it
reads/writes the SSP register, so people may confuse it with the
GUEST_SSP field in the nested VMCS. I prefer to rename it to
MSR_KVM_SSP.

>the fields in SMRAM for 64-bit mode, so handle the state accordingly.
>
>Check HF_SMM_MASK to determine whether kvm_cet_is_msr_accessible()
>is called in SMM mode so that kvm_{set,get}_msr() works in SMM mode.
>
>Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
>---
> arch/x86/kvm/smm.c | 17 +++++++++++++++++
> arch/x86/kvm/smm.h |  2 +-
> arch/x86/kvm/x86.c | 12 +++++++++++-
> 3 files changed, 29 insertions(+), 2 deletions(-)
>
>diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
>index b42111a24cc2..a4e19d72224f 100644
>--- a/arch/x86/kvm/smm.c
>+++ b/arch/x86/kvm/smm.c
>@@ -309,6 +309,15 @@ void enter_smm(struct kvm_vcpu *vcpu)
>
> 	kvm_smm_changed(vcpu, true);
>
>+#ifdef CONFIG_X86_64
>+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK)) {
>+		u64 data;
>+
>+		if (!kvm_get_msr(vcpu, MSR_KVM_GUEST_SSP, &data))
>+			smram.smram64.ssp = data;

I don't think it is correct to continue if KVM fails to read the MSR.
How about:

	if (kvm_get_msr(vcpu, MSR_KVM_GUEST_SSP, &smram.smram64.ssp))
		goto error;

>+	}
>+#endif
>+
> 	if (kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smram)))
> 		goto error;
>
>@@ -586,6 +595,14 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
> 	if ((vcpu->arch.hflags & HF_SMM_INSIDE_NMI_MASK) == 0)
> 		static_call(kvm_x86_set_nmi_mask)(vcpu, false);
>
>+#ifdef CONFIG_X86_64
>+	if (guest_can_use(vcpu, X86_FEATURE_SHSTK)) {
>+		u64 data = smram.smram64.ssp;
>+
>+		if (is_noncanonical_address(data, vcpu) && IS_ALIGNED(data, 4))

Shouldn't these checks already be done inside kvm_set_msr()?

>+			kvm_set_msr(vcpu, MSR_KVM_GUEST_SSP, data);

Please handle the failure case, probably by returning
X86EMUL_UNHANDLEABLE like the other failure paths in this function.

>+	}
>+#endif
> 	kvm_smm_changed(vcpu, false);
>
> 	/*
>diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
>index a1cf2ac5bd78..b3efef7cb1dc 100644
>--- a/arch/x86/kvm/smm.h
>+++ b/arch/x86/kvm/smm.h
>@@ -116,7 +116,7 @@ struct kvm_smram_state_64 {
> 	u32 smbase;
> 	u32 reserved4[5];
>
>-	/* ssp and svm_* fields below are not implemented by KVM */
>+	/* svm_* fields below are not implemented by KVM */

Move this comment one line down, below the ssp field.

> 	u64 ssp;
> 	u64 svm_guest_pat;
> 	u64 svm_host_efer;
>diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>index f7558f0f6fc0..70d7c80889d6 100644
>--- a/arch/x86/kvm/x86.c
>+++ b/arch/x86/kvm/x86.c
>@@ -3653,8 +3653,18 @@ static bool kvm_cet_is_msr_accessible(struct kvm_vcpu *vcpu,
> 	if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK))
> 		return false;
>
>-	if (msr->index == MSR_KVM_GUEST_SSP)
>+	/*
>+	 * This MSR is synthesized mainly for userspace access during
>+	 * Live Migration, it also can be accessed in SMM mode by VMM.
>+	 * Guest is not allowed to access this MSR.
>+	 */
>+	if (msr->index == MSR_KVM_GUEST_SSP) {
>+		if (IS_ENABLED(CONFIG_X86_64) &&
>+		    !!(vcpu->arch.hflags & HF_SMM_MASK))

Use is_smm() instead.
>+			return true;
>+
> 		return msr->host_initiated;
>+	}
>
> 	return msr->host_initiated ||
> 	       guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
>--
>2.27.0
>