On Thu, Nov 24, 2022, Yang Weijiang wrote:
> Add Arch LBR feature bit in CPU cap-mask to expose the feature.
> Only max LBR depth is supported for guest, and it's consistent
> with host Arch LBR settings.
>
> Co-developed-by: Like Xu <like.xu@xxxxxxxxxxxxxxx>
> Signed-off-by: Like Xu <like.xu@xxxxxxxxxxxxxxx>
> Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
> Reviewed-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/cpuid.c | 36 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 85e3df6217af..60b3c591d462 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -134,6 +134,19 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu,
>  		if (vaddr_bits != 48 && vaddr_bits != 57 && vaddr_bits != 0)
>  			return -EINVAL;
>  	}
> +	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
> +		best = cpuid_entry2_find(entries, nent, 0x1c, 0);
> +		if (best) {
> +			unsigned int eax, ebx, ecx, edx;
> +
> +			/* Reject user-space CPUID if depth is different from host's.*/
> +			cpuid_count(0x1c, 0, &eax, &ebx, &ecx, &edx);
> +
> +			if ((eax & 0xff) &&
> +			    (best->eax & 0xff) != BIT(fls(eax & 0xff) - 1))
> +				return -EINVAL;
> +		}
> +	}

Drop this.  While I think everyone agrees that KVM's CPUID uAPI sucks, the
status quo is to let userspace shoot itself in the foot.  I.e. disallow
enabling LBRs with a "bad" config, but don't reject the ioctl().

>
>  	/*
>  	 * Exposing dynamic xfeatures to the guest requires additional
> @@ -652,7 +665,7 @@ void kvm_set_cpu_caps(void)
>  		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
>  		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM) |
>  		F(SERIALIZE) | F(TSXLDTRK) | F(AVX512_FP16) |
> -		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16)
> +		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16) | F(ARCH_LBR)

As mentioned earlier, omit this and make it opt-in.

>  	);
>
>  	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
> @@ -1074,6 +1087,27 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  			goto out;
>  		}
>  		break;
> +	/* Architectural LBR */
> +	case 0x1c: {
> +		u32 lbr_depth_mask = entry->eax & 0xff;
> +
> +		if (!lbr_depth_mask ||
> +		    !kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
> +			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
> +			break;
> +		}
> +		/*
> +		 * KVM only exposes the maximum supported depth, which is the
> +		 * fixed value used on the host side.
> +		 * KVM doesn't allow VMM userspace to adjust LBR depth because
> +		 * guest LBR emulation depends on the configuration of host LBR
> +		 * driver.
> +		 */
> +		lbr_depth_mask = BIT((fls(lbr_depth_mask) - 1));

C'mon.  More unnecessary dependencies on perf using the max depth.

> +		entry->eax &= ~0xff;
> +		entry->eax |= lbr_depth_mask;
> +		break;
> +	}
> 	/* Intel AMX TILE */
> 	case 0x1d:
> 		if (!kvm_cpu_cap_has(X86_FEATURE_AMX_TILE)) {
> --
> 2.27.0
>
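
To be explicit about what "make it opt-in" and "don't reject the ioctl()"
mean, here's a rough, completely untested sketch.  cpu_has_vmx_arch_lbr() is
a placeholder for whatever the VMCS controls check ends up looking like, and
host_lbr_depth_mask is a stand-in for however the set of supported depths
gets plumbed in from perf; don't read either as the final API.

	/*
	 * vmx.c: advertise Arch LBR from vendor code iff KVM can actually
	 * virtualize it, instead of unconditionally setting the bit in the
	 * common F() mask in kvm_set_cpu_caps().
	 */
	static __init void vmx_set_cpu_caps(void)
	{
		...
		if (cpu_has_vmx_arch_lbr())
			kvm_cpu_cap_check_and_set(X86_FEATURE_ARCH_LBR);
		...
	}

	/*
	 * pmu_intel.c: sanity check the guest's CPUID.0x1C depth when the
	 * vPMU is (re)configured, and simply leave LBRs disabled on a bad
	 * config instead of failing KVM_SET_CPUID2.  Checking against a
	 * mask of supported depths avoids assuming perf uses the max.
	 */
	static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
	{
		...
		entry = kvm_find_cpuid_entry(vcpu, 0x1c);
		if (!entry || !(entry->eax & 0xff & host_lbr_depth_mask))
			lbr_desc->records.nr = 0;
		...
	}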