On 1/19/21 12:31 PM, Sean Christopherson wrote:
> On Fri, Jan 15, 2021, Babu Moger wrote:
>> ---
>>  arch/x86/include/asm/svm.h |  4 +++-
>>  arch/x86/kvm/svm/sev.c     |  4 ++++
>>  arch/x86/kvm/svm/svm.c     | 19 +++++++++++++++----
>>  3 files changed, 22 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
>> index 1c561945b426..772e60efe243 100644
>> --- a/arch/x86/include/asm/svm.h
>> +++ b/arch/x86/include/asm/svm.h
>> @@ -269,7 +269,9 @@ struct vmcb_save_area {
>>  	 * SEV-ES guests when referenced through the GHCB or for
>>  	 * saving to the host save area.
>>  	 */
>> -	u8 reserved_7[80];
>> +	u8 reserved_7[72];
>> +	u32 spec_ctrl;		/* Guest version of SPEC_CTRL at 0x2E0 */
>> +	u8 reserved_7b[4];
>
> Don't nested_prepare_vmcb_save() and nested_vmcb_checks() need to be updated to
> handle the new field, too?

Ok. Sure. I will check and test a few combinations to make sure of these
changes.

>
>>  	u32 pkru;
>>  	u8 reserved_7a[20];
>>  	u64 reserved_8;		/* rax already available at 0x01f8 */
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index c8ffdbc81709..959d6e47bd84 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -546,6 +546,10 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
>>  	save->pkru = svm->vcpu.arch.pkru;
>>  	save->xss = svm->vcpu.arch.ia32_xss;
>>
>> +	/* Update the guest SPEC_CTRL value in the save area */
>> +	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>> +		save->spec_ctrl = svm->spec_ctrl;
>
> I think this can be dropped if svm->spec_ctrl is unused when V_SPEC_CTRL is
> supported (see below).  IIUC, the memcpy() that's just out of sight would do
> the propagation to the VMSA.

Yes, that is right. I will remove this.

>
>> +
>>  	/*
>>  	 * SEV-ES will use a VMSA that is pointed to by the VMCB, not
>>  	 * the traditional VMSA that is part of the VMCB. Copy the
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 7ef171790d02..a0cb01a5c8c5 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -1244,6 +1244,9 @@ static void init_vmcb(struct vcpu_svm *svm)
>>
>>  	svm_check_invpcid(svm);
>>
>> +	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>> +		save->spec_ctrl = svm->spec_ctrl;
>> +
>>  	if (kvm_vcpu_apicv_active(&svm->vcpu))
>>  		avic_init_vmcb(svm);
>>
>> @@ -3789,7 +3792,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>  	 * is no need to worry about the conditional branch over the wrmsr
>>  	 * being speculatively taken.
>>  	 */
>> -	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
>> +	if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>> +		svm->vmcb->save.spec_ctrl = svm->spec_ctrl;
>> +	else
>> +		x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
>
> Can't we avoid functional code in svm_vcpu_run() entirely when V_SPEC_CTRL is
> supported?  Make this code a nop, disable interception from time zero, and

Sean, I thought you mentioned earlier about not changing the interception
mechanism. Do you think we should disable the interception right away if
V_SPEC_CTRL is supported?

> read/write the VMCB field in svm_{get,set}_msr().  I.e. don't touch
> svm->spec_ctrl if V_SPEC_CTRL is supported.
>
> 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> 		x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
>
> 	svm_vcpu_enter_exit(vcpu, svm);
>
> 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL) &&
> 	    unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
> 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);

Ok.
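If we do end up disabling the interception, it looks like a one-liner in
init_vmcb(); a rough, untested sketch, assuming the existing
set_msr_interception() helper in svm.c (the trailing 1, 1 requests read/write
pass-through):

	/*
	 * Untested sketch: give the guest direct access to SPEC_CTRL when
	 * the hardware virtualizes it; the guest value then lives in the
	 * VMCB save area rather than in svm->spec_ctrl.
	 */
	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
		set_msr_interception(&svm->vcpu, svm->msrpm, MSR_IA32_SPEC_CTRL,
				     1, 1);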
It appears Sean's sequence above might work fine with changes in
svm_{get,set}_msr() to update save.spec_ctrl (a rough sketch of that change
follows at the end of this mail). I will retest a few combinations to make
sure it works.

Thanks
Babu

>
>>  	svm_vcpu_enter_exit(vcpu, svm);
>>
>> @@ -3808,13 +3814,18 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>  	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
>>  	 * save it.
>>  	 */
>> -	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
>> -		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
>> +	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))) {
>> +		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>> +			svm->spec_ctrl = svm->vmcb->save.spec_ctrl;
>> +		else
>> +			svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
>> +	}
>>
>>  	if (!sev_es_guest(svm->vcpu.kvm))
>>  		reload_tss(vcpu);
>>
>> -	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
>> +	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>> +		x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
>>
>>  	if (!sev_es_guest(svm->vcpu.kvm)) {
>>  		vcpu->arch.cr2 = svm->vmcb->save.cr2;
>>
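For completeness, a minimal sketch of the svm_{get,set}_msr() change mentioned
above (untested; it assumes the existing MSR_IA32_SPEC_CTRL cases keep their
current permission checks, i.e. guest_has_spec_ctrl_msr() and
kvm_spec_ctrl_test_value(), and only the load/store of the value changes):

	/* In svm_get_msr(), after the existing permission check: */
	case MSR_IA32_SPEC_CTRL:
		if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
			msr_info->data = svm->vmcb->save.spec_ctrl;
		else
			msr_info->data = svm->spec_ctrl;
		break;

	/* In svm_set_msr(), after the existing value check: */
	case MSR_IA32_SPEC_CTRL:
		if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
			svm->vmcb->save.spec_ctrl = data;
		else
			svm->spec_ctrl = data;
		break;

With that in place, svm->spec_ctrl is never touched when V_SPEC_CTRL is
supported, which matches the "don't touch svm->spec_ctrl" suggestion above.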