On 11/21/2024 5:38 AM, Sean Christopherson wrote:
> On Thu, Aug 01, 2024, Mingwei Zhang wrote:
>> From: Sandipan Das <sandipan.das@xxxxxxx>
>>
>> On AMD platforms, there is no way to restore PerfCntrGlobalCtl at
>> VM-Entry or clear it at VM-Exit. Since the register states will be
>> restored before entering and saved after exiting guest context, the
>> counters can keep ticking and even overflow, leading to chaos while
>> still in host context.
>>
>> To avoid this, the PERF_CTLx MSRs (event selectors) are always
>> intercepted. KVM will always set the GuestOnly bit and clear the
>> HostOnly bit so that the counters run only in guest context even if
>> their enable bits are set. Intercepting these MSRs is also necessary
>> for guest event filtering.
>>
>> Signed-off-by: Sandipan Das <sandipan.das@xxxxxxx>
>> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
>> ---
>>  arch/x86/kvm/svm/pmu.c | 7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
>> index cc03c3e9941f..2b7cc7616162 100644
>> --- a/arch/x86/kvm/svm/pmu.c
>> +++ b/arch/x86/kvm/svm/pmu.c
>> @@ -165,7 +165,12 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>  		data &= ~pmu->reserved_bits;
>>  		if (data != pmc->eventsel) {
>>  			pmc->eventsel = data;
>> -			kvm_pmu_request_counter_reprogram(pmc);
>> +			if (is_passthrough_pmu_enabled(vcpu)) {
>> +				data &= ~AMD64_EVENTSEL_HOSTONLY;
>> +				pmc->eventsel_hw = data | AMD64_EVENTSEL_GUESTONLY;
>
> Do both in a single statement, i.e.
>
> 	pmc->eventsel_hw = (data & ~AMD64_EVENTSEL_HOSTONLY) |
> 			   AMD64_EVENTSEL_GUESTONLY;
>
> Though per my earlier comments, this likely needs to end up in reprogram_counter().

It looks like we need to add a PMU callback and call it from reprogram_counter().

>> +			} else {
>> +				kvm_pmu_request_counter_reprogram(pmc);
>> +			}
>>  		}
>>  		return 0;
>>  	}
>> --
>> 2.46.0.rc1.232.g9752f9e123-goog
>>