On Wed, Nov 8, 2023 at 6:39 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Nov 07, 2023, Jim Mattson wrote:
> > On Tue, Nov 7, 2023 at 4:31 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > Set the eventsel for all fixed counters during PMU initialization, the
> > > eventsel is hardcoded and consumed if and only if the counter is supported,
> > > i.e. there is no reason to redo the setup every time the PMU is refreshed.
> > >
> > > Configuring all KVM-supported fixed counter also eliminates a potential
> > > pitfall if/when KVM supports discontiguous fixed counters, in which case
> > > configuring only nr_arch_fixed_counters will be insufficient (ignoring the
> > > fact that KVM will need many other changes to support discontiguous fixed
> > > counters).
> > >
> > > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > > ---
> > >  arch/x86/kvm/vmx/pmu_intel.c | 14 ++++----------
> > >  1 file changed, 4 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> > > index c4f2c6a268e7..5fc5a62af428 100644
> > > --- a/arch/x86/kvm/vmx/pmu_intel.c
> > > +++ b/arch/x86/kvm/vmx/pmu_intel.c
> > > @@ -409,7 +409,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > >   * Note, reference cycles is counted using a perf-defined "psuedo-encoding",
> > >   * as there is no architectural general purpose encoding for reference cycles.
> > >   */
> > > -static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
> > > +static u64 intel_get_fixed_pmc_eventsel(int index)
> > >  {
> > >  	const struct {
> > >  		u8 eventsel;
> > > @@ -419,17 +419,11 @@ static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
> > >  		[1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
> > >  		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
> > >  	};
> > > -	int i;
> > >
> > >  	BUILD_BUG_ON(ARRAY_SIZE(fixed_pmc_events) != KVM_PMC_MAX_FIXED);
> > >
> > > -	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> > > -		int index = array_index_nospec(i, KVM_PMC_MAX_FIXED);
> > > -		struct kvm_pmc *pmc = &pmu->fixed_counters[index];
> > > -
> > > -		pmc->eventsel = (fixed_pmc_events[index].unit_mask << 8) |
> > > -				fixed_pmc_events[index].eventsel;
> > > -	}
> > > +	return (fixed_pmc_events[index].unit_mask << 8) |
> > > +	       fixed_pmc_events[index].eventsel;
> >
> > Can I just say that it's really confusing that the value returned by
> > intel_get_fixed_pmc_eventsel() is the concatenation of an 8-bit "unit
> > mask" and an 8-bit "eventsel"?
>
> Heh, blame the SDM for having an "event select" field in "event select" MSRs.
>
> Is this better?
>
> 	const struct {
> 		u8 event;
> 		u8 unit_mask;
> 	} fixed_pmc_events[] = {
> 		[0] = { 0xc0, 0x00 }, /* Instruction Retired / PERF_COUNT_HW_INSTRUCTIONS. */
> 		[1] = { 0x3c, 0x00 }, /* CPU Cycles/ PERF_COUNT_HW_CPU_CYCLES. */
> 		[2] = { 0x00, 0x03 }, /* Reference Cycles / PERF_COUNT_HW_REF_CPU_CYCLES*/
> 	};

Better. Thank you.
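
For context on the value being discussed: it packs the architectural event
select in bits 7:0 and the unit mask in bits 15:8, i.e. the same layout as the
low 16 bits of an IA32_PERFEVTSELx MSR, which is the source of the naming
confusion above. A minimal standalone sketch of that packing (helper and
variable names here are illustrative, not taken from the patch):

	/*
	 * Illustrative only -- not from the patch. Shows how the 16-bit
	 * "eventsel" discussed above is assembled: architectural event
	 * select in bits 7:0, unit mask in bits 15:8, mirroring the
	 * IA32_PERFEVTSELx layout.
	 */
	#include <stdint.h>
	#include <stdio.h>

	static uint64_t pack_fixed_pmc_eventsel(uint8_t event, uint8_t unit_mask)
	{
		return ((uint64_t)unit_mask << 8) | event;
	}

	int main(void)
	{
		/* Fixed counter 2: reference cycles, perf's pseudo-encoding 0x00/0x03. */
		uint64_t eventsel = pack_fixed_pmc_eventsel(0x00, 0x03);

		printf("event select = 0x%02x, unit mask = 0x%02x\n",
		       (unsigned int)(eventsel & 0xff),
		       (unsigned int)((eventsel >> 8) & 0xff));
		return 0;
	}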