Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events

On Tue, Mar 7, 2023 at 7:19 AM Like Xu <like.xu.linux@xxxxxxxxx> wrote:
>
> On 2023/3/7 22:13, Aaron Lewis wrote:
>
> > When counting "Instructions Retired" (0xc0) in a guest, KVM will
> > occasionally increment the PMU counter regardless of whether that
> > event is being filtered. This is because some PMU events are
> > incremented via kvm_pmu_trigger_event(), which doesn't know about the
> > event filter. Add the event filter to kvm_pmu_trigger_event(), so
> > events that are disallowed do not increment their counters.
> It would be nice to have:
>
>      Reported-by: Jinrong Liang <cloudliang@xxxxxxxxxxx>
>
> since he also found this issue.
>

Sure, I can add that.

> > Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
> > Signed-off-by: Aaron Lewis <aaronlewis@xxxxxxxxxx>
>
> Reviewed-by: Like Xu <likexu@xxxxxxxxxxx>
>
> > ---
> >   arch/x86/kvm/pmu.c | 13 ++++++++-----
> >   1 file changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 612e6c70ce2e..9914a9027c60 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
> >       return is_fixed_event_allowed(filter, pmc->idx);
> >   }
> >
> > +static bool event_is_allowed(struct kvm_pmc *pmc)
>
> Nit: an inline event_is_allowed() here might be better.

I purposely didn't mark this 'inline' because Sean generally discourages
it; he has commented in several reviews not to use 'inline' and to
instead leave the decision to the compiler, unless __always_inline is
warranted.  I think the sentiment is: either use the strong hint or
don't use the keyword at all.  This seems like a case where the
compiler can decide, and a strong hint isn't needed.
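
For example, here is a standalone sketch of the two options being
discussed (the helper names and the local __always_inline definition
are mine, purely illustrative, not kernel code):

/*
 * Standalone sketch, not kernel code: the helper names and the local
 * __always_inline definition are only for illustration.
 */
#include <stdbool.h>

#define __always_inline inline __attribute__((__always_inline__))

/* No hint: small static helpers like this are typically inlined at -O2
 * anyway, so the compiler is left to decide. */
static bool helper_compiler_decides(bool enabled, bool in_use, bool allowed)
{
	return enabled && in_use && allowed;
}

/* Strong hint: reserved for cases where inlining really must happen. */
static __always_inline bool helper_forced(bool enabled, bool in_use,
					  bool allowed)
{
	return enabled && in_use && allowed;
}

int main(void)
{
	return helper_compiler_decides(true, true, true) &&
	       helper_forced(true, true, true) ? 0 : 1;
}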

>
> > +{
> > +     return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
> > +            check_pmu_event_filter(pmc);
> > +}
> > +
> >   static void reprogram_counter(struct kvm_pmc *pmc)
> >   {
> >       struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> > @@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
> >
> >       pmc_pause_counter(pmc);
> >
> > -     if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
> > -             goto reprogram_complete;
> > -
> > -     if (!check_pmu_event_filter(pmc))
> > +     if (!event_is_allowed(pmc))
> >               goto reprogram_complete;
> >
> >       if (pmc->counter < pmc->prev_counter)
> > @@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
> >       for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
> >               pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
> >
> > -             if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
> > +             if (!pmc || !event_is_allowed(pmc))
> >                       continue;
> >
> >               /* Ignore checks for edge detect, pin control, invert and CMASK bits */
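
To make the behavioural change concrete, here is a toy userspace model
(entirely made-up names, not kernel code) of what the patch does: the
trigger path now performs the same allowed-check as reprogram_counter(),
so a filtered event no longer bumps its counter.

/*
 * Toy userspace model of the change, with made-up names; not kernel
 * code. toy_event_is_allowed() mirrors the consolidated
 * event_is_allowed() helper, and toy_trigger_event() mirrors the
 * kvm_pmu_trigger_event() path that now honours the filter.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_pmc {
	bool enabled;
	bool in_use;
	bool allowed_by_filter;	/* result of the event-filter lookup */
	uint64_t counter;
};

static bool toy_event_is_allowed(const struct toy_pmc *pmc)
{
	return pmc->enabled && pmc->in_use && pmc->allowed_by_filter;
}

static void toy_trigger_event(struct toy_pmc *pmc)
{
	/* Disallowed events no longer increment their counters. */
	if (!toy_event_is_allowed(pmc))
		return;
	pmc->counter++;
}

int main(void)
{
	struct toy_pmc filtered = { .enabled = true, .in_use = true,
				    .allowed_by_filter = false };
	struct toy_pmc allowed = { .enabled = true, .in_use = true,
				   .allowed_by_filter = true };

	toy_trigger_event(&filtered);
	toy_trigger_event(&allowed);

	/* Prints "filtered: 0, allowed: 1". */
	printf("filtered: %llu, allowed: %llu\n",
	       (unsigned long long)filtered.counter,
	       (unsigned long long)allowed.counter);
	return 0;
}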



