Re: [PATCH v3 1/5] kvm: x86/pmu: Introduce masked events to the pmu event filter

On Wed, Aug 3, 2022 at 6:36 PM Aaron Lewis <aaronlewis@xxxxxxxxxx> wrote:
>
> On Tue, Aug 2, 2022 at 5:19 PM Jim Mattson <jmattson@xxxxxxxxxx> wrote:
> >
> > On Fri, Jul 8, 2022 at 6:17 PM Aaron Lewis <aaronlewis@xxxxxxxxxx> wrote:
> > below. Maybe it's too late to avoid confusion, but I'd suggest
> > referring to the 64-bit object as a "PMU event filter entry," or
> > something like that.
> >
>
> What about "filtered event"?

Filter event?

> > >
> > > +static bool allowed_by_masked_events(struct kvm_pmu_event_filter *filter,
> > > +                                    u64 eventsel)
> > > +{
> > > +       if (is_filtered(filter, eventsel, /*invert=*/false))
> > > +               if (!is_filtered(filter, eventsel, /*invert=*/true))
> >
> > Perhaps you could eliminate the ugly parameter comments if you
> > maintained the "normal" and inverted entries in separate lists. It
> > might also speed things up for the common case, assuming that inverted
> > entries are uncommon.
>
> Is it really that ugly?  I thought it made things clearer, since you
> don't have to jump to the function to see what the bool does.
>
> I can see an argument for walking a shorter list for inverted entries
> in the common case.
>
> To do this I'll likely make an internal copy of the struct, like
> 'struct kvm_x86_msr_filter' does, to avoid the flexible array at the
> end of the pmu event filter and to avoid changing the current ABI.
> I'll just have two lists at the end: one for regular entries and one
> for inverted entries.  If there's a better approach, I'm all ears.

Right. I'm not suggesting that you change the ABI. If you move the
'invert' bit to a higher bit position than any event select bit, then
by including the invert bit in your sort key, the sorted list will
naturally be partitioned into positive and negative entries.
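
Something like this is what I have in mind -- untested, and the bit
positions are only placeholders (all that matters is that the invert
bit is more significant than any event select bit in the key):

/*
 * Hypothetical sort key: event select in the low bits, the invert
 * flag just above it.  Sorting filter->events on this key partitions
 * the array so that all non-inverted entries come first, followed by
 * all inverted ones, and each group can then be bsearch'd on its own.
 */
#define FILTER_EVENTSEL_MASK	GENMASK_ULL(11, 0)	/* assumed layout */
#define FILTER_INVERT_BIT	BIT_ULL(63)		/* assumed layout */

static u64 masked_event_sort_key(u64 entry)
{
	u64 key = entry & FILTER_EVENTSEL_MASK;

	if (entry & FILTER_INVERT_BIT)
		key |= BIT_ULL(12);

	return key;
}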

> >
> > > +                       return filter->action == KVM_PMU_EVENT_ALLOW;
> > > +
> > > +       return filter->action == KVM_PMU_EVENT_DENY;
> > > +}
> > > +
> > > +static bool allowed_by_default_events(struct kvm_pmu_event_filter *filter,
> > > +                                   u64 eventsel)
> > > +{
> > > +       u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
> > > +
> > > +       if (bsearch(&key, filter->events, filter->nevents,
> > > +                   sizeof(u64), cmp_u64))
> > > +               return filter->action == KVM_PMU_EVENT_ALLOW;
> > > +
> > > +       return filter->action == KVM_PMU_EVENT_DENY;
> > > +}
> > > +
> > >  void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
> > >  {
> > >         u64 config;
> > > @@ -226,14 +318,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
> > >
> > >         filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
> > >         if (filter) {
> > > -               __u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
> > > -
> > > -               if (bsearch(&key, filter->events, filter->nevents,
> > > -                           sizeof(__u64), cmp_u64))
> > > -                       allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
> > > -               else
> > > -                       allow_event = filter->action == KVM_PMU_EVENT_DENY;
> > > +               allow_event = (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) ?
> > > +                       allowed_by_masked_events(filter, eventsel) :
> > > +                       allowed_by_default_events(filter, eventsel);
> >
> > If you converted all of the legacy filters into masked filters by
> > simply setting the mask field to '0xff' when copying from userspace,
> > you wouldn't need the complexity of two different matching algorithms.
>
> Agreed that it will simplify the code, but it will make the legacy
> case slower: instead of being able to binary search directly for a
> filtered event, we will have to walk all of the entries that share
> an event select, looking for a matching unit mask.

I think the simplicity of having one common abstraction outweighs the
performance penalty for legacy filter lists with a large number of
unit masks for a particular event select. However, I could be
convinced otherwise with a practical example.
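
To make that concrete, the conversion could happen when the filter is
copied in from userspace -- untested, and make_masked_event() plus the
field extraction below are stand-ins for whatever the masked encoding
ends up being:

/*
 * Sketch: rewrite a legacy filter entry into the masked format.
 * Keeping the event select, using the legacy unit mask as the match
 * value, and setting the mask to 0xff means only an exact unit-mask
 * match hits, which preserves the legacy semantics.
 * make_masked_event() is a made-up helper.
 */
static u64 convert_to_masked_event(u64 raw_event)
{
	u64 eventsel = raw_event & AMD64_EVENTSEL_EVENT;
	u64 unit_mask = (raw_event & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;

	return make_masked_event(eventsel, /*mask=*/0xff,
				 /*match=*/unit_mask, /*invert=*/false);
}

With every entry in masked form, reprogram_gp_counter() only ever has
to take the masked lookup path.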


