On 1/29/2025 5:58 PM, Adrian Hunter wrote:
From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
Several user-return MSRs are clobbered on TD exit. Restore those values on
TD exit and before returning to ring 3.
Co-developed-by: Tony Lindgren <tony.lindgren@xxxxxxxxxxxxxxx>
Signed-off-by: Tony Lindgren <tony.lindgren@xxxxxxxxxxxxxxx>
Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
Signed-off-by: Adrian Hunter <adrian.hunter@xxxxxxxxx>
Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
---
TD vcpu enter/exit v2:
- No changes
TD vcpu enter/exit v1:
- Rename tdx_user_return_update_cache() ->
tdx_user_return_msr_update_cache() (extrapolated from Binbin)
- Adjust to rename in previous patches (Binbin)
- Simplify comment (Tony)
- Move code change in tdx_hardware_setup() to __tdx_bringup().
---
arch/x86/kvm/vmx/tdx.c | 44 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 43 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e4355553569a..a0f5cdfd290b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -729,6 +729,28 @@ int tdx_vcpu_pre_run(struct kvm_vcpu *vcpu)
return 1;
}
+struct tdx_uret_msr {
+ u32 msr;
+ unsigned int slot;
+ u64 defval;
+};
+
+static struct tdx_uret_msr tdx_uret_msrs[] = {
+ {.msr = MSR_SYSCALL_MASK, .defval = 0x20200 },
+ {.msr = MSR_STAR,},
+ {.msr = MSR_LSTAR,},
+ {.msr = MSR_TSC_AUX,},
+};
+
+static void tdx_user_return_msr_update_cache(void)
+{
+ int i;
I think this can be optimized to skip updating the cache if it is already
up to date. There is no need to refresh the cache after every TD exit.
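Roughly something like the below (untested, and it assumes a new helper,
e.g. kvm_user_return_msrs_cached() exported from x86.c to report whether
the per-CPU user-return notifier is still registered; no such helper
exists today):

	static void tdx_user_return_msr_update_cache(void)
	{
		int i;

		/*
		 * Hypothetical: skip the loop if the per-CPU cache still
		 * holds the values written on a previous TD exit, i.e. the
		 * user-return notifier has not fired since then.
		 */
		if (kvm_user_return_msrs_cached())
			return;

		for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++)
			kvm_user_return_msr_update_cache(tdx_uret_msrs[i].slot,
							 tdx_uret_msrs[i].defval);
	}

Whether it is worth it is another question, since
kvm_user_return_msr_update_cache() only writes a per-CPU cache, and the
cached values can also be changed by VMX vCPUs sharing the pCPU, so a
plain "registered" check would need care.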
+ for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++)
+ kvm_user_return_msr_update_cache(tdx_uret_msrs[i].slot,
+ tdx_uret_msrs[i].defval);
+}
+
static bool tdx_guest_state_is_invalid(struct kvm_vcpu *vcpu)
{
struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
@@ -784,6 +806,8 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
tdx_vcpu_enter_exit(vcpu);
+ tdx_user_return_msr_update_cache();
+
kvm_load_host_xsave_state(vcpu);
vcpu->arch.regs_avail &= ~TDX_REGS_UNSUPPORTED_SET;
@@ -2245,7 +2269,25 @@ static bool __init kvm_can_support_tdx(void)
static int __init __tdx_bringup(void)
{
const struct tdx_sys_info_td_conf *td_conf;
- int r;
+ int r, i;
+
+ for (i = 0; i < ARRAY_SIZE(tdx_uret_msrs); i++) {
+ /*
+ * Check if MSRs (tdx_uret_msrs) can be saved/restored
+ * before returning to user space.
+ *
+ * this_cpu_ptr(user_return_msrs)->registered isn't checked
+ * because the registration is done at vcpu runtime by
+ * tdx_user_return_msr_update_cache().
+ */
+ tdx_uret_msrs[i].slot = kvm_find_user_return_msr(tdx_uret_msrs[i].msr);
+ if (tdx_uret_msrs[i].slot == -1) {
+ /* If any MSR isn't supported, it is a KVM bug */
+ pr_err("MSR %x isn't included by kvm_find_user_return_msr\n",
+ tdx_uret_msrs[i].msr);
+ return -EIO;
+ }
+ }
/*
* Enabling TDX requires enabling hardware virtualization first,