> -----Original Message-----
> From: Lendacky, Thomas <Thomas.Lendacky@xxxxxxx>
> Sent: Monday, January 4, 2021 9:47 AM
> To: Moger, Babu <Babu.Moger@xxxxxxx>; pbonzini@xxxxxxxxxx;
> tglx@xxxxxxxxxxxxx; mingo@xxxxxxxxxx; bp@xxxxxxxxx
> Cc: fenghua.yu@xxxxxxxxx; tony.luck@xxxxxxxxx; wanpengli@xxxxxxxxxxx;
> kvm@xxxxxxxxxxxxxxx; peterz@xxxxxxxxxxxxx; seanjc@xxxxxxxxxx;
> joro@xxxxxxxxxx; x86@xxxxxxxxxx; kyung.min.park@xxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; krish.sadhukhan@xxxxxxxxxx;
> hpa@xxxxxxxxx; mgross@xxxxxxxxxxxxxxx; vkuznets@xxxxxxxxxx;
> Phillips, Kim <kim.phillips@xxxxxxx>; Huang2, Wei <Wei.Huang2@xxxxxxx>;
> jmattson@xxxxxxxxxx
> Subject: Re: [PATCH v2 2/2] KVM: SVM: Add support for Virtual SPEC_CTRL
>
> On 12/22/20 4:31 PM, Babu Moger wrote:
> > Newer AMD processors have a feature to virtualize the use of the
> > SPEC_CTRL MSR. A hypervisor may wish to impose speculation controls
> > on guest execution, or a guest may want to impose its own speculation
> > controls. Therefore, the processor implements both host and guest
> > versions of SPEC_CTRL. Presence of this feature is indicated via
> > CPUID function 0x8000000A_EDX[20]: GuestSpecCtrl. Hypervisors are
> > not required to enable this feature since it is automatically enabled
> > on processors that support it.
> >
> > When in host mode, the host SPEC_CTRL value is in effect and writes
> > update only the host version of SPEC_CTRL. On a VMRUN, the processor
> > loads the guest version of SPEC_CTRL from the VMCB. When the guest
> > writes SPEC_CTRL, only the guest version is updated. On a VMEXIT,
> > the guest version is saved into the VMCB and the processor returns
> > to using only the host SPEC_CTRL for speculation control. The guest
> > SPEC_CTRL is located at offset 0x2E0 in the VMCB.
>
> With the SEV-ES hypervisor support now in the tree, this will also need
> support in sev_es_sync_vmsa() to put the initial svm->spec_ctrl value
> in the SEV-ES VMSA.
>
> > The effective SPEC_CTRL setting is the guest SPEC_CTRL setting or'ed
> > with the hypervisor SPEC_CTRL setting. This allows the hypervisor to
> > ensure a minimum SPEC_CTRL if desired.
> >
> > This support also fixes an issue where a guest may sometimes see an
> > inconsistent value for the SPEC_CTRL MSR on processors that support
> > this feature. With the current SPEC_CTRL support, the first write to
> > SPEC_CTRL is intercepted and the virtualized version of the SPEC_CTRL
> > MSR is not updated. When the guest reads back the SPEC_CTRL MSR, it
> > will be 0x0 instead of the actual expected value. There isn't a
> > security concern here, because the host SPEC_CTRL value is or'ed with
> > the guest SPEC_CTRL value to generate the effective SPEC_CTRL value.
> > KVM writes the guest's virtualized SPEC_CTRL value to the SPEC_CTRL
> > MSR just before VMRUN, so the hardware will always have the actual
> > value even though it doesn't appear that way in the guest. The guest
> > will only see the proper value for the SPEC_CTRL register if it
> > writes to the SPEC_CTRL register again. With Virtual SPEC_CTRL
> > support, the MSR interception of SPEC_CTRL is disabled during
> > vmcb_init, so this will no longer be an issue.
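
To make the or'ing behavior concrete, a rough illustration (the names
below are illustrative only, not actual kernel symbols):

	/* Value the processor applies while the guest is running */
	effective_spec_ctrl = host_spec_ctrl | guest_spec_ctrl;

A bit set by the host (STIBP, for example) therefore stays in effect
even if the guest clears it in its own version of the MSR.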
> >
> > Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> > ---
> >  arch/x86/include/asm/svm.h |    4 +++-
> >  arch/x86/kvm/svm/svm.c     |   29 +++++++++++++++++++++++++----
> >  2 files changed, 28 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> > index 71d630bb5e08..753b25db427c 100644
> > --- a/arch/x86/include/asm/svm.h
> > +++ b/arch/x86/include/asm/svm.h
> > @@ -248,12 +248,14 @@ struct vmcb_save_area {
> >  	u64 br_to;
> >  	u64 last_excp_from;
> >  	u64 last_excp_to;
> > +	u8 reserved_12[72];
> > +	u32 spec_ctrl;		/* Guest version of SPEC_CTRL at 0x2E0 */
> >
> >  	/*
> >  	 * The following part of the save area is valid only for
> >  	 * SEV-ES guests when referenced through the GHCB.
> >  	 */
> > -	u8 reserved_7[104];
> > +	u8 reserved_7[28];
> >  	u64 reserved_8;		/* rax already available at 0x01f8 */
> >  	u64 rcx;
> >  	u64 rdx;
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 79b3a564f1c9..6d3db3e8cdfe 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -1230,6 +1230,16 @@ static void init_vmcb(struct vcpu_svm *svm)
> >
> >  	svm_check_invpcid(svm);
> >
> > +	/*
> > +	 * If the host supports V_SPEC_CTRL then disable the interception
> > +	 * of MSR_IA32_SPEC_CTRL.
> > +	 */
> > +	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL)) {
> > +		save->spec_ctrl = svm->spec_ctrl;
> > +		set_msr_interception(&svm->vcpu, svm->msrpm,
> > +				     MSR_IA32_SPEC_CTRL, 1, 1);
> > +	}
> > +
>
> I thought Jim's feedback was to keep the support as originally coded
> with respect to the MSR intercept and only update svm_vcpu_run() to
> either read/write the MSR or read/write the save area value based on
> the feature. So I think this can be removed.

Ok. Sure. Will remove this change.

> >  	if (kvm_vcpu_apicv_active(&svm->vcpu))
> >  		avic_init_vmcb(svm);
> >
> > @@ -2549,7 +2559,10 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
> >  			return 1;
> >
> > -		msr_info->data = svm->spec_ctrl;
> > +		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> > +			msr_info->data = svm->vmcb->save.spec_ctrl;
> > +		else
> > +			msr_info->data = svm->spec_ctrl;
>
> This is unneeded since svm->vmcb->save.spec_ctrl is saved in
> svm->spec_ctrl on VMEXIT.

Sure.

> >  		break;
> >  	case MSR_AMD64_VIRT_SPEC_CTRL:
> >  		if (!msr_info->host_initiated &&
> > @@ -2640,6 +2653,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> >  			return 1;
> >
> >  		svm->spec_ctrl = data;
> > +		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> > +			svm->vmcb->save.spec_ctrl = data;
>
> And this is unneeded since svm->vmcb->save.spec_ctrl is set to
> svm->spec_ctrl before VMRUN.

Sure.

> >  		if (!data)
> >  			break;
> >
> > @@ -3590,7 +3605,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >  	 * is no need to worry about the conditional branch over the wrmsr
> >  	 * being speculatively taken.
> >  	 */
> > -	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
> > +	if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> > +		svm->vmcb->save.spec_ctrl = svm->spec_ctrl;
> > +	else
> > +		x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
> >
> >  	svm_vcpu_enter_exit(vcpu, svm);
> >
> > @@ -3609,12 +3627,15 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >  	 * If the L02 MSR bitmap does not intercept the MSR, then we need to
> >  	 * save it.
> >  	 */
> > -	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
> > +	if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> > +		svm->spec_ctrl = svm->vmcb->save.spec_ctrl;
> > +	else if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
> >  		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
>
> If I understood Jim's feedback correctly, this will change to something
> like:
>
> 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))) {
> 		if (static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> 			svm->spec_ctrl = svm->vmcb->save.spec_ctrl;
> 		else
> 			svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
> 	}

Sure. Will take care of this in the next revision.

Thanks
Babu

> Thanks,
> Tom
>
> >  	reload_tss(vcpu);
> >
> > -	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
> > +	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
> > +		x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
> >
> >  	vcpu->arch.cr2 = svm->vmcb->save.cr2;
> >  	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
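
For the sev_es_sync_vmsa() comment above, I am thinking of something
along these lines (a rough sketch only; it assumes the VMSA uses the
same save area layout and that the local save area pointer there is
named save, so the final form may differ in the next revision):

	/* Seed the guest's initial SPEC_CTRL value into the SEV-ES VMSA */
	if (boot_cpu_has(X86_FEATURE_V_SPEC_CTRL))
		save->spec_ctrl = svm->spec_ctrl;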