Jim Mattson <jmattson@xxxxxxxxxx> writes:

> KVM can only virtualize as many PMCs as the host supports.
>
> Limit the number of generic counters and fixed counters to the number
> of corresponding counters supported on the host, rather than to
> INTEL_PMC_MAX_GENERIC and INTEL_PMC_MAX_FIXED, respectively.
>
> Note that INTEL_PMC_MAX_GENERIC is currently 32, which exceeds the 18
> contiguous MSR indices reserved by Intel for event selectors. Since
> the existing code relies on a contiguous range of MSR indices for
> event selectors, it can't possibly work for more than 18 general
> purpose counters.

Should we also trim msrs_to_save[] by removing impossible entries
(18-31) then? (Rough sketch of what I mean at the end of this mail.)

>
> Fixes: f5132b01386b5a ("KVM: Expose a version 2 architectural PMU to a guests")
> Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Reviewed-by: Marc Orr <marcorr@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 4dea0e0e7e392..3e9c059099e94 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -262,6 +262,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> +	struct x86_pmu_capability x86_pmu;
>  	struct kvm_cpuid_entry2 *entry;
>  	union cpuid10_eax eax;
>  	union cpuid10_edx edx;
> @@ -283,8 +284,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>  	if (!pmu->version)
>  		return;
>
> +	perf_get_x86_pmu_capability(&x86_pmu);
> +
>  	pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters,
> -					 INTEL_PMC_MAX_GENERIC);
> +					 x86_pmu.num_counters_gp);

This is a theoretical fix which is orthogonal to the issue with
state_test I reported on Friday, right? Because in my case
'eax.split.num_counters' is already 8.

>  	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1;
>  	pmu->available_event_types = ~entry->ebx &
>  			((1ull << eax.split.mask_length) - 1);
> @@ -294,7 +297,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>  	} else {
>  		pmu->nr_arch_fixed_counters =
>  			min_t(int, edx.split.num_counters_fixed,
> -			      INTEL_PMC_MAX_FIXED);
> +			      x86_pmu.num_counters_fixed);
>  		pmu->counter_bitmask[KVM_PMC_FIXED] =
>  			((u64)1 << edx.split.bit_width_fixed) - 1;
>  	}

--
Vitaly
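
P.S. For the msrs_to_save[] trimming above I was thinking of something
along these lines -- only a sketch, the helper name is made up and I
haven't looked at how it would plug into the probing loop in
kvm_init_msr_list():

/*
 * Sketch only, untested: report whether a general-purpose event
 * selector / counter MSR is actually backed by the host PMU.  The
 * helper name is hypothetical; the idea is that the msrs_to_save[]
 * probing loop would drop entries for which this returns false.
 */
static bool kvm_pmu_msr_possible(u32 msr)
{
	struct x86_pmu_capability x86_pmu;
	unsigned int num_gp;

	perf_get_x86_pmu_capability(&x86_pmu);
	num_gp = min_t(unsigned int, INTEL_PMC_MAX_GENERIC,
		       x86_pmu.num_counters_gp);

	if (msr >= MSR_ARCH_PERFMON_EVENTSEL0 &&
	    msr < MSR_ARCH_PERFMON_EVENTSEL0 + INTEL_PMC_MAX_GENERIC)
		return msr - MSR_ARCH_PERFMON_EVENTSEL0 < num_gp;

	if (msr >= MSR_ARCH_PERFMON_PERFCTR0 &&
	    msr < MSR_ARCH_PERFMON_PERFCTR0 + INTEL_PMC_MAX_GENERIC)
		return msr - MSR_ARCH_PERFMON_PERFCTR0 < num_gp;

	/* Anything else stays in the list. */
	return true;
}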