From: Like Xu <likexu@xxxxxxxxxxx>

In either case, counters points at the fixed or the gp pmc array, and
since each pmc already records its own type, the counter_bitmask index
can be read straight from the pmc with a plain memory load instead of
being recomputed from a conditional, keeping the branch predictor out
of the picture.

Signed-off-by: Like Xu <likexu@xxxxxxxxxxx>
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index e5cec07ca8d9..28b0a784f6e9 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -142,7 +142,7 @@ static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	}
 	if (idx >= num_counters)
 		return NULL;
-	*mask &= pmu->counter_bitmask[fixed ? KVM_PMC_FIXED : KVM_PMC_GP];
+	*mask &= pmu->counter_bitmask[counters->type];
 	return &counters[array_index_nospec(idx, num_counters)];
 }
-- 
2.38.1
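
For context, a standalone sketch of the equivalence the one-liner relies
on: every pmc in the gp array is initialized with type == KVM_PMC_GP and
every pmc in the fixed array with type == KVM_PMC_FIXED, so indexing
counter_bitmask by counters->type selects the same element as the ternary
did. The struct layouts, array sizes, and bitmask widths below are
simplified stand-ins for illustration (the real definitions live in
arch/x86/include/asm/kvm_host.h), not kernel code:

#include <assert.h>
#include <stddef.h>

enum pmc_type { KVM_PMC_GP = 0, KVM_PMC_FIXED };

/* Simplified stand-in for struct kvm_pmc: only the type field matters here. */
struct kvm_pmc { enum pmc_type type; };

/* Simplified stand-in for struct kvm_pmu; array sizes are illustrative. */
struct kvm_pmu {
	unsigned long long counter_bitmask[2];
	struct kvm_pmc gp_counters[8];
	struct kvm_pmc fixed_counters[3];
};

int main(void)
{
	struct kvm_pmu pmu = {
		/*
		 * Widths chosen to differ so the check below is meaningful;
		 * real widths come from CPUID, not from these constants.
		 */
		.counter_bitmask = {
			[KVM_PMC_GP]	= (1ULL << 48) - 1,
			[KVM_PMC_FIXED]	= (1ULL << 40) - 1,
		},
	};
	size_t i;

	/* Mirror the pmu init: each pmc records which array it lives in. */
	for (i = 0; i < 8; i++)
		pmu.gp_counters[i].type = KVM_PMC_GP;
	for (i = 0; i < 3; i++)
		pmu.fixed_counters[i].type = KVM_PMC_FIXED;

	for (int fixed = 0; fixed <= 1; fixed++) {
		struct kvm_pmc *counters =
			fixed ? pmu.fixed_counters : pmu.gp_counters;

		/* The data-dependent load matches the branchy index. */
		assert(pmu.counter_bitmask[counters->type] ==
		       pmu.counter_bitmask[fixed ? KVM_PMC_FIXED : KVM_PMC_GP]);
	}
	return 0;
}

The ternary inside the loop exists only to compare against the old
expression; the patch removes exactly that conditional from the rdpmc
path.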