On Tue, Mar 05, 2024, Like Xu wrote:
> On 5/3/2024 3:46 am, Sean Christopherson wrote:
> > > > > > ---
> > > > > >   arch/x86/kvm/svm/pmu.c | 1 +
> > > > > >   1 file changed, 1 insertion(+)
> > > > > >
> > > > > > diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
> > > > > > index b6a7ad4d6914..14709c564d6a 100644
> > > > > > --- a/arch/x86/kvm/svm/pmu.c
> > > > > > +++ b/arch/x86/kvm/svm/pmu.c
> > > > > > @@ -205,6 +205,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
> > > > > >   	if (pmu->version > 1) {
> > > > > >   		pmu->global_ctrl_mask = ~((1ull << pmu->nr_arch_gp_counters) - 1);
> > > > > >   		pmu->global_status_mask = pmu->global_ctrl_mask;
> > > > > > +		pmu->global_ctrl = ~pmu->global_ctrl_mask;
> > >
> > > It seems to be more easily understood to calculate global_ctrl first and
> > > then derive the global_ctrl_mask (negative logic).
> >
> > Hrm, I'm torn. On one hand, awful name aside (global_ctrl_mask should really be
> > something like global_ctrl_rsvd_bits), the computation of the reserved bits should
> > come from the capabilities of the PMU, not from the RESET value.
> >
> > On the other hand, setting _all_ non-reserved bits will likely do the wrong thing
> > if AMD ever adds bits in PerfCntGlobalCtl that aren't tied to general purpose
> > counters. But, that's a future theoretical problem, so I'm inclined to vote for
> > Sandipan's approach.
>
> I suspect that Intel hardware also has this behaviour [*] although guest
> kernels using Intel pmu version 1 are pretty much non-existent.
>
> [*] Table 10-1. IA-32 and Intel® 64 Processor States Following Power-up,
>     Reset, or INIT (Contd.)

Aha! Nice. To save people lookups, the table says:

  IA32_PERF_GLOBAL_CTRL: Sets bits n-1:0 and clears the upper bits.

and

  Where "n" is the number of general-purpose counters available in the
  processor.

Which means that (a) KVM can handle this in common code and (b) we can dodge
the whole reserved bits chicken-and-egg problem, since global_ctrl *can't* be
derived from global_ctrl_mask.

This? (compile tested only)

---
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Tue, 5 Mar 2024 09:02:26 -0800
Subject: [PATCH] KVM: x86/pmu: Set enable bits for GP counters in
 PERF_GLOBAL_CTRL at "RESET"

Set the enable bits for general purpose counters in IA32_PERF_GLOBAL_CTRL
when refreshing the PMU to emulate the MSR's architecturally defined
post-RESET behavior. Per Intel's SDM:

  IA32_PERF_GLOBAL_CTRL: Sets bits n-1:0 and clears the upper bits.

and

  Where "n" is the number of general-purpose counters available in the
  processor.

This is a long-standing bug that was recently exposed when KVM added
support for AMD's PerfMonV2, i.e. when KVM started exposing a vPMU with
PERF_GLOBAL_CTRL to guest software that only knew how to program v1 PMUs
(that don't support PERF_GLOBAL_CTRL).

Failure to emulate the post-RESET behavior results in such guests
unknowingly leaving all general purpose counters globally disabled (the
entire reason the post-RESET value sets the GP counter enable bits is to
maintain backwards compatibility).

The bug has likely gone unnoticed because PERF_GLOBAL_CTRL has been
supported on Intel CPUs for as long as KVM has existed, i.e. hardly anyone
is running guest software that isn't aware of PERF_GLOBAL_CTRL on Intel
PMUs.

Note, kvm_pmu_refresh() can be invoked multiple times, i.e. it's not a
"pure" RESET flow. But it can only be called prior to the first KVM_RUN,
i.e. the guest will only ever observe the final value.
Reported-by: Babu Moger <babu.moger@xxxxxxx>
Cc: Like Xu <like.xu.linux@xxxxxxxxx>
Cc: Mingwei Zhang <mizhang@xxxxxxxxxx>
Cc: Dapeng Mi <dapeng1.mi@xxxxxxxxxxxxxxx>
Cc: Sandipan Das <sandipan.das@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 arch/x86/kvm/pmu.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 87cc6c8809ad..f61ce26aeb90 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -741,6 +741,8 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
 	if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
 		return;
 
@@ -750,8 +752,18 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 	 */
 	kvm_pmu_reset(vcpu);
 
-	bitmap_zero(vcpu_to_pmu(vcpu)->all_valid_pmc_idx, X86_PMC_IDX_MAX);
+	bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
 	static_call(kvm_x86_pmu_refresh)(vcpu);
+
+	/*
+	 * At RESET, both Intel and AMD CPUs set all enable bits for general
+	 * purpose counters in IA32_PERF_GLOBAL_CTRL (so that software that
+	 * was written for v1 PMUs doesn't unknowingly leave GP counters disabled
+	 * in the global controls). Emulate that behavior when refreshing the
+	 * PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL.
+	 */
+	if (kvm_pmu_has_perf_global_ctrl(pmu))
+		pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)

base-commit: 1d7ae977d219e68698fdb9bed1049dc561038aa1
--
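
(Not part of the patch, purely illustrative: a tiny userspace sketch of the
GENMASK_ULL() computation above, with GENMASK_ULL() re-implemented locally
and made-up GP counter counts, to show the "RESET" value that lands in
global_ctrl, e.g. 0xf, 0x3f, and 0xff for 4, 6, and 8 GP counters.)

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Local stand-in for the kernel's GENMASK_ULL(): bits h..l set, rest clear. */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	/* Hypothetical GP counter counts, e.g. 6 for AMD PerfMonV2. */
	const unsigned int nr_gp_counters[] = { 4, 6, 8 };
	const size_t nr = sizeof(nr_gp_counters) / sizeof(nr_gp_counters[0]);

	for (size_t i = 0; i < nr; i++) {
		unsigned int n = nr_gp_counters[i];
		uint64_t global_ctrl = GENMASK_ULL(n - 1, 0);

		/* Bits n-1:0 set, upper bits clear, per the SDM's post-RESET value. */
		printf("n = %u => PERF_GLOBAL_CTRL at RESET = 0x%llx\n",
		       n, (unsigned long long)global_ctrl);
	}

	return 0;
}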