On Tue, May 23, 2023, Roman Kagan wrote:
> On Tue, May 23, 2023 at 08:40:53PM +0800, Like Xu wrote:
> > On 4/5/2023 8:00 pm, Roman Kagan wrote:
> > > Performance counters are defined to have width less than 64 bits. The
> > > vPMU code maintains the counters in u64 variables but assumes the value
> > > to fit within the defined width. However, for Intel non-full-width
> > > counters (MSR_IA32_PERFCTRx) the value received from the guest is
> > > truncated to 32 bits and then sign-extended to full 64 bits. If a
> > > negative value is set, it's sign-extended to 64 bits, but then in
> > > kvm_pmu_incr_counter() it's incremented, truncated, and compared to the
> > > previous value for overflow detection.
> >
> > Thanks for reporting this issue. An easier-to-understand fix could be:
> >
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index e17be25de6ca..51e75f121234 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -718,7 +718,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
> >
> >  static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
> >  {
> > -	pmc->prev_counter = pmc->counter;
> > +	pmc->prev_counter = pmc->counter & pmc_bitmask(pmc);
> >  	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
> >  	kvm_pmu_request_counter_reprogram(pmc);
> >  }
> >
> > Considering that the pmu code uses pmc_bitmask(pmc) everywhere to wrap
> > around, I would prefer to use this fix above first and then do a more
> > thorough cleanup based on your below diff. What do you think?
>
> I did exactly this at first. However, it felt more natural, easier to
> reason about, and less error-prone going forward to maintain the
> invariant that pmc->counter always fits in the assumed width.

Agreed, KVM shouldn't store information that's not supposed to exist.
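[Editor's note: for illustration, here is a minimal user-space sketch of the
failure mode being discussed. It is not the actual KVM code: the 48-bit
counter width, the simplified struct, and the "counter < prev_counter"
overflow check are assumptions modeled on the thread, and the function
names are stand-ins for kvm_pmu_incr_counter()/reprogram_counter().]

#include <stdint.h>
#include <stdio.h>

/* Assume a 48-bit counter; stands in for pmc_bitmask(pmc). */
#define COUNTER_BITMASK ((1ULL << 48) - 1)

struct pmc {
	uint64_t counter;
	uint64_t prev_counter;
};

/* Pre-fix behavior: prev_counter keeps the sign-extended high bits. */
static void incr_counter_buggy(struct pmc *pmc)
{
	pmc->prev_counter = pmc->counter;
	pmc->counter = (pmc->counter + 1) & COUNTER_BITMASK;
}

/* Like Xu's one-line fix: mask prev_counter to the counter width too. */
static void incr_counter_fixed(struct pmc *pmc)
{
	pmc->prev_counter = pmc->counter & COUNTER_BITMASK;
	pmc->counter = (pmc->counter + 1) & COUNTER_BITMASK;
}

/* Overflow detection in the style of reprogram_counter(). */
static int overflowed(const struct pmc *pmc)
{
	return pmc->counter < pmc->prev_counter;
}

int main(void)
{
	/* Guest writes -2 via a non-full-width MSR: the value is
	 * truncated to 32 bits, then sign-extended to 64 bits,
	 * i.e. 0xFFFFFFFFFFFFFFFE. */
	struct pmc pmc = { .counter = (uint64_t)(int64_t)(int32_t)-2 };

	incr_counter_buggy(&pmc);
	printf("buggy: overflow=%d\n", overflowed(&pmc)); /* spurious 1 */

	pmc.counter = (uint64_t)(int64_t)(int32_t)-2;
	incr_counter_fixed(&pmc);
	printf("fixed: overflow=%d\n", overflowed(&pmc)); /* 0 */

	return 0;
}

In the buggy variant, prev_counter retains bits above the counter width
while the incremented counter is masked, so the comparison reports a
spurious overflow. Masking prev_counter fixes the comparison; Roman's
broader point is that keeping pmc->counter within the defined width at
all times makes such masking unnecessary in the first place.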