Hi Marc,

On Sun, Nov 13, 2022 at 8:38 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
>
> Even when using PMUv3p5 (which implies 64bit counters), there is
> no way for AArch32 to write to the top 32 bits of the counters.
> The only way to influence these bits (other than by counting
> events) is by writing PMCR.P==1.
>
> Make sure we obey the architecture and preserve the top 32 bits
> on a counter update.
>
> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/pmu-emul.c | 35 +++++++++++++++++++++++++++--------
>  1 file changed, 27 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index ea0c8411641f..419e5e0a13d0 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -119,13 +119,8 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  	return counter;
>  }
>
> -/**
> - * kvm_pmu_set_counter_value - set PMU counter value
> - * @vcpu: The vcpu pointer
> - * @select_idx: The counter index
> - * @val: The counter value
> - */
> -void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> +static void kvm_pmu_set_counter(struct kvm_vcpu *vcpu, u64 select_idx, u64 val,
> +				bool force)
>  {
>  	u64 reg;
>
> @@ -135,12 +130,36 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  	kvm_pmu_release_perf_event(&vcpu->arch.pmu.pmc[select_idx]);
>
>  	reg = counter_index_to_reg(select_idx);
> +
> +	if (vcpu_mode_is_32bit(vcpu) && select_idx != ARMV8_PMU_CYCLE_IDX &&
> +	    !force) {
> +		/*
> +		 * Even with PMUv3p5, AArch32 cannot write to the top
> +		 * 32bit of the counters. The only possible course of
> +		 * action is to use PMCR.P, which will reset them to
> +		 * 0 (the only use of the 'force' parameter).
> +		 */
> +		val = lower_32_bits(val);
> +		val |= upper_32_bits(__vcpu_sys_reg(vcpu, reg));

Shouldn't the result of upper_32_bits() be shifted 32 bits left before
ORing (to preserve the upper 32 bits of the current value)?
Thank you,
Reiji

> +	}
> +
>  	__vcpu_sys_reg(vcpu, reg) = val;
>
>  	/* Recreate the perf event to reflect the updated sample_period */
>  	kvm_pmu_create_perf_event(vcpu, select_idx);
>  }
>
> +/**
> + * kvm_pmu_set_counter_value - set PMU counter value
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + * @val: The counter value
> + */
> +void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> +{
> +	kvm_pmu_set_counter(vcpu, select_idx, val, false);
> +}
> +
>  /**
>   * kvm_pmu_release_perf_event - remove the perf event
>   * @pmc: The PMU counter pointer
> @@ -533,7 +552,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  		unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
>  		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
>  		for_each_set_bit(i, &mask, 32)
> -			kvm_pmu_set_counter_value(vcpu, i, 0);
> +			kvm_pmu_set_counter(vcpu, i, 0, true);
>  	}
>  }
>
> --
> 2.34.1
>