On 1/31/2018 1:37 PM, KarimAllah Ahmed wrote:
> From: Ashok Raj <ashok.raj@xxxxxxxxx>
>
> The Indirect Branch Predictor Barrier (IBPB) is an indirect branch
> control mechanism. It keeps earlier branches from influencing
> later ones.
>
> Unlike IBRS and STIBP, IBPB does not define a new mode of operation.
> It's a command that ensures predicted branch targets aren't used after
> the barrier. Although IBRS and IBPB are enumerated by the same CPUID
> bit, IBPB is very different.
>
> IBPB helps mitigate against three potential attacks:
>
> * Mitigate guests from being attacked by other guests.
>   - This is addressed by issuing IBPB when we do a guest switch.
>
> * Mitigate attacks from guest/ring3->host/ring3.
>   These would require an IBPB during context switch in host, or after
>   VMEXIT. The host process has two ways to mitigate:
>   - Either it can be compiled with retpoline.
>   - If it is going through a context switch, and has set !dumpable, then
>     there is an IBPB in that path.
>     (Tim's patch: https://patchwork.kernel.org/patch/10192871)
>   - The case where after a VMEXIT you return back to Qemu might make
>     Qemu attackable from the guest when Qemu isn't compiled with retpoline.
>     There are issues reported when doing IBPB on every VMEXIT that resulted
>     in some tsc calibration woes in the guest.
>
> * Mitigate guest/ring0->host/ring0 attacks.
>   When the host kernel is using retpoline it is safe against these attacks.
>   If the host kernel isn't using retpoline we might need to do an IBPB flush
>   on every VMEXIT.
>
> Even when using retpoline for indirect calls, in certain conditions 'ret'
> can use the BTB on Skylake-era CPUs. There are other mitigations
> available like RSB stuffing/clearing.
>
> * IBPB is issued only for SVM during svm_free_vcpu().
>   VMX has a vmclear and SVM doesn't. Follow the discussion here:
>   https://lkml.org/lkml/2018/1/15/146
>
> Please refer to the following spec for more details on the enumeration
> and control.
>
> Refer here to get documentation about mitigations:
>
> https://software.intel.com/en-us/side-channel-security-support
>
> [peterz: rebase and changelog rewrite]
> [karahmed: - rebase
>            - vmx: expose PRED_CMD if guest has it in CPUID
>            - svm: only pass through IBPB if guest has it in CPUID
>            - vmx: support !cpu_has_vmx_msr_bitmap()
>            - vmx: support nested]
> [dwmw2: Expose CPUID bit too (AMD IBPB only for now as we lack IBRS)
>         PRED_CMD is a write-only MSR]
>
> Cc: Asit Mallick <asit.k.mallick@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: Arjan Van De Ven <arjan.van.de.ven@xxxxxxxxx>
> Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> Cc: Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Signed-off-by: Ashok Raj <ashok.raj@xxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Link: http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok.raj@xxxxxxxxx
> Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> Signed-off-by: KarimAllah Ahmed <karahmed@xxxxxxxxx>
>
> v5:
> - Use MSR_TYPE_W instead of MSR_TYPE_R for the MSR.
> - Always merge the bitmaps unconditionally.
> - Add PRED_CMD to direct_access_msrs.
> - Also check for X86_FEATURE_SPEC_CTRL for the msr reads/writes
> - rewrite the commit message (from ashok.raj@)
> ---
>  arch/x86/kvm/cpuid.c | 11 ++++++++++-
>  arch/x86/kvm/svm.c   | 28 ++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx.c   | 29 +++++++++++++++++++++++++----
>  3 files changed, 63 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c0eb337..033004d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -365,6 +365,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
>  		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
>
> +	/* cpuid 0x80000008.ebx */
> +	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
> +		F(IBPB);
> +
>  	/* cpuid 0xC0000001.edx */
>  	const u32 kvm_cpuid_C000_0001_edx_x86_features =
>  		F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
> @@ -625,7 +629,12 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		if (!g_phys_as)
>  			g_phys_as = phys_as;
>  		entry->eax = g_phys_as | (virt_as << 8);
> -		entry->ebx = entry->edx = 0;
> +		entry->edx = 0;
> +		/* IBPB isn't necessarily present in hardware cpuid */
> +		if (boot_cpu_has(X86_FEATURE_IBPB))
> +			entry->ebx |= F(IBPB);
> +		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
> +		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
>  		break;
>  	}
>  	case 0x80000019:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f40d0da..bfbb7b9 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -250,6 +250,7 @@ static const struct svm_direct_access_msrs {
>  	{ .index = MSR_SYSCALL_MASK, .always = true },
>  #endif
>  	{ .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
> +	{ .index = MSR_IA32_PRED_CMD, .always = false },

Just a nit, but could you not split up the two LASTBRANCH related MSRs
and instead put this either before the LASTBRANCHFROMIP or at the end
of the array?  (Ditto for SPEC_CTRL in patch 5).

Thanks,
Tom

>  	{ .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
>  	{ .index = MSR_IA32_LASTINTFROMIP, .always = false },
>  	{ .index = MSR_IA32_LASTINTTOIP, .always = false },
> @@ -529,6 +530,7 @@ struct svm_cpu_data {
>  	struct kvm_ldttss_desc *tss_desc;
>
>  	struct page *save_area;
> +	struct vmcb *current_vmcb;
>  };
>
>  static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
> @@ -1703,11 +1705,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>  	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
>  	kvm_vcpu_uninit(vcpu);
>  	kmem_cache_free(kvm_vcpu_cache, svm);
> +	/*
> +	 * The vmcb page can be recycled, causing a false negative in
> +	 * svm_vcpu_load(). So do a full IBPB now.
> +	 */
> +	indirect_branch_prediction_barrier();
>  }
>
>  static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
>  	int i;
>
>  	if (unlikely(cpu != vcpu->cpu)) {
> @@ -1736,6 +1744,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (static_cpu_has(X86_FEATURE_RDTSCP))
>  		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
>
> +	if (sd->current_vmcb != svm->vmcb) {
> +		sd->current_vmcb = svm->vmcb;
> +		indirect_branch_prediction_barrier();
> +	}
>  	avic_vcpu_load(vcpu, cpu);
>  }
>
> @@ -3684,6 +3696,22 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr);
>  		break;
> +	case MSR_IA32_PRED_CMD:
> +		if (!msr->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
> +			return 1;
> +
> +		if (data & ~PRED_CMD_IBPB)
> +			return 1;
> +
> +		if (!data)
> +			break;
> +
> +		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +		if (is_guest_mode(vcpu))
> +			break;
> +		set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
> +		break;
>  	case MSR_STAR:
>  		svm->vmcb->save.star = data;
>  		break;
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d46a61b..2e4e8af 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2285,6 +2285,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
>  		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
>  		vmcs_load(vmx->loaded_vmcs->vmcs);
> +		indirect_branch_prediction_barrier();
>  	}
>
>  	if (!already_loaded) {
> @@ -3342,6 +3343,26 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr_info);
>  		break;
> +	case MSR_IA32_PRED_CMD:
> +		if (!msr_info->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
> +			return 1;
> +
> +		if (data & ~PRED_CMD_IBPB)
> +			return 1;
> +
> +		if (!data)
> +			break;
> +
> +		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +
> +		if (is_guest_mode(vcpu))
> +			break;
> +
> +		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
> +					      MSR_TYPE_W);
> +		break;
>  	case MSR_IA32_CR_PAT:
>  		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
>  			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> @@ -10045,10 +10066,6 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>  	unsigned long *msr_bitmap_l1;
>  	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
>
> -	/* This shortcut is ok because we support only x2APIC MSRs so far. */
> -	if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
> -		return false;
> -
>  	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
>  	if (is_error_page(page))
>  		return false;
> @@ -10056,6 +10073,10 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>
>  	memset(msr_bitmap_l0, 0xff, PAGE_SIZE);
>
> +	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
> +					     MSR_IA32_PRED_CMD,
> +					     MSR_TYPE_W);
> +
>  	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
>  		if (nested_cpu_has_apic_reg_virt(vmcs12))
>  			for (msr = 0x800; msr <= 0x8ff; msr++)
>
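For readers following the thread: the indirect_branch_prediction_barrier()
call used throughout the patch is not a new mode but a one-shot command, as
the commit message says. A minimal sketch of what issuing an IBPB amounts
to is below, assuming the usual wrmsrl() helper and that MSR_IA32_PRED_CMD
and PRED_CMD_IBPB are already defined by the earlier patches in this series
(the sketch omits the feature check and alternatives patching the real
kernel helper uses, and the function name is illustrative only):

	#include <asm/msr.h>        /* wrmsrl() */
	#include <asm/msr-index.h>  /* MSR_IA32_PRED_CMD, PRED_CMD_IBPB */

	/*
	 * Illustrative sketch, not the in-tree helper: an IBPB is issued by
	 * writing PRED_CMD_IBPB to the write-only MSR_IA32_PRED_CMD MSR.
	 * The write itself is the barrier; there is no state to read back.
	 */
	static inline void ibpb_barrier_sketch(void)
	{
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
	}

Because the barrier is just an MSR write, the patch can cheaply gate it on
"did the vmcb/vmcs actually change" in the vcpu_load paths and on a guest
write of PRED_CMD_IBPB in the set_msr handlers, rather than issuing it
unconditionally on every VMEXIT.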