On Thu, Aug 01, 2024, Mingwei Zhang wrote:
> From: Manali Shukla <manali.shukla@xxxxxxx>
>
> With the Passthrough PMU enabled, the PERF_CTLx MSRs (event selectors) are
> always intercepted, and so the event filter check can be done directly
> inside amd_pmu_set_msr().
>
> Add a check to allow writing to the event selector for GP counters if and
> only if the event is allowed by the filter.

This belongs in the patch that adds AMD support for setting
pmc->eventsel_hw.  E.g. reverting just this patch would leave KVM in a
very broken state.  And it's unnecessarily difficult to review.

> Signed-off-by: Manali Shukla <manali.shukla@xxxxxxx>
> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm/pmu.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> index 86818da66bbe..9f3e910ee453 100644
> --- a/arch/x86/kvm/svm/pmu.c
> +++ b/arch/x86/kvm/svm/pmu.c
> @@ -166,6 +166,15 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		if (data != pmc->eventsel) {
>  			pmc->eventsel = data;
>  			if (is_passthrough_pmu_enabled(vcpu)) {
> +				if (!check_pmu_event_filter(pmc)) {
> +					/*
> +					 * When the guest requests an invalid
> +					 * event, stop the counter by clearing
> +					 * the event selector MSR.
> +					 */
> +					pmc->eventsel_hw = 0;
> +					return 0;
> +				}
>  				data &= ~AMD64_EVENTSEL_HOSTONLY;
>  				pmc->eventsel_hw = data | AMD64_EVENTSEL_GUESTONLY;
>  			} else {
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>
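
To illustrate the point about keeping the logic together, here is a rough,
untested sketch of how the filter check and the GUESTONLY/HOSTONLY munging
could land as one unit in the patch that introduces pmc->eventsel_hw.  The
helper name amd_pmu_set_eventsel_hw is hypothetical and not part of the
posted series; the body is assumed from the quoted hunk:

	/*
	 * Hypothetical helper, not in the posted series: centralizes
	 * programming of the hardware event selector for the passthrough
	 * PMU, so that the event filter check and the host/guest-only
	 * bit manipulation are introduced (and revertable) together.
	 */
	static void amd_pmu_set_eventsel_hw(struct kvm_pmc *pmc, u64 data)
	{
		if (!check_pmu_event_filter(pmc)) {
			/*
			 * Disallowed event: stop the counter by zeroing the
			 * hardware selector, but keep the guest-visible
			 * pmc->eventsel value intact.
			 */
			pmc->eventsel_hw = 0;
			return;
		}

		/* Count guest activity only, never host activity. */
		data &= ~AMD64_EVENTSEL_HOSTONLY;
		pmc->eventsel_hw = data | AMD64_EVENTSEL_GUESTONLY;
	}

The passthrough branch of amd_pmu_set_msr() would then collapse to a single
call to the helper, and a revert of the combined patch would remove both
behaviors at once instead of leaving KVM half-converted.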