On Sun, Oct 06, 2019 at 11:46:34AM +0100, maz@xxxxxxxxxx wrote:
> From: Marc Zyngier <maz@xxxxxxxxxx>
>
> When a counter is disabled, its value is sampled before the event
> is disabled, and the value is written back to the shadow register.
>
> In the process, the value gets truncated to 32 bits, which is adequate
> for any counter but the cycle counter (defined as a 64-bit counter).
>
> This obviously results in a corrupted counter, and things like
> "perf record -e cycles" not working at all when run in a guest...
> A similar, but less critical, bug exists in kvm_pmu_get_counter_value.
>
> Make the truncation conditional on the counter not being the cycle
> counter, which results in a minor code reorganisation.
>
> Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
> Cc: Andrew Murray <andrew.murray@xxxxxxx>
> Reported-by: Julien Thierry <julien.thierry.kdev@xxxxxxxxx>
> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> ---

Reviewed-by: Andrew Murray <andrew.murray@xxxxxxx>

>  virt/kvm/arm/pmu.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 362a01886bab..c30c3a74fc7f 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -146,8 +146,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  	if (kvm_pmu_pmc_is_chained(pmc) &&
>  	    kvm_pmu_idx_is_high_counter(select_idx))
>  		counter = upper_32_bits(counter);
> -
> -	else if (!kvm_pmu_idx_is_64bit(vcpu, select_idx))
> +	else if (select_idx != ARMV8_PMU_CYCLE_IDX)
>  		counter = lower_32_bits(counter);
>
>  	return counter;
> @@ -193,7 +192,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   */
>  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  {
> -	u64 counter, reg;
> +	u64 counter, reg, val;
>
>  	pmc = kvm_pmu_get_canonical_pmc(pmc);
>  	if (!pmc->perf_event)
> @@ -201,16 +200,19 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>
>  	counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
>
> -	if (kvm_pmu_pmc_is_chained(pmc)) {
> -		reg = PMEVCNTR0_EL0 + pmc->idx;
> -		__vcpu_sys_reg(vcpu, reg) = lower_32_bits(counter);
> -		__vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
> +	if (pmc->idx == ARMV8_PMU_CYCLE_IDX) {
> +		reg = PMCCNTR_EL0;
> +		val = counter;
>  	} else {
> -		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
> -		      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
> -		__vcpu_sys_reg(vcpu, reg) = lower_32_bits(counter);
> +		reg = PMEVCNTR0_EL0 + pmc->idx;
> +		val = lower_32_bits(counter);
>  	}
>
> +	__vcpu_sys_reg(vcpu, reg) = val;
> +
> +	if (kvm_pmu_pmc_is_chained(pmc))
> +		__vcpu_sys_reg(vcpu, reg + 1) = upper_32_bits(counter);
> +
>  	kvm_pmu_release_perf_event(pmc);
>  }
>
> --
> 2.20.1
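
To make the truncation concrete, here is a minimal user-space sketch. It is
standalone C, not kernel code: lower_32_bits() is reimplemented here to mirror
the kernel macro, and the sample cycle count is invented for illustration. It
shows how storing only the low 32 bits of a 64-bit cycle count, as the old
kvm_pmu_stop_counter() did for PMCCNTR_EL0, corrupts the value:

#include <stdio.h>
#include <stdint.h>

/* Mirrors the kernel's lower_32_bits() macro. */
static uint32_t lower_32_bits(uint64_t v)
{
	return (uint32_t)v;
}

int main(void)
{
	/* Invented cycle count that has overflowed 32 bits. */
	uint64_t cycles = 0x100000123ULL;

	/* Old behaviour: the shadow register ends up with 0x00000123. */
	printf("truncated: 0x%08x\n", (unsigned int)lower_32_bits(cycles));

	/* Fixed behaviour for the cycle counter: all 64 bits are kept. */
	printf("full:      0x%llx\n", (unsigned long long)cycles);
	return 0;
}

With the fix applied, the cycle counter path writes the full 64-bit value to
the PMCCNTR_EL0 shadow register, and only the 32-bit event counters are
truncated to their architectural width.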