Re: [PATCH RESEND v2 8/8] KVM: x86/svm/pmu: Rewrite get_gp_pmc_amd() for more counters scalability

On Tue, Aug 23, 2022, Like Xu wrote:
>  static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
>  					     enum pmu_type type)
>  {
>  	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
> +	unsigned int idx;
>  
>  	if (!vcpu->kvm->arch.enable_pmu)
>  		return NULL;
>  
>  	switch (msr) {
> -	case MSR_F15H_PERF_CTL0:
> -	case MSR_F15H_PERF_CTL1:
> -	case MSR_F15H_PERF_CTL2:
> -	case MSR_F15H_PERF_CTL3:
> -	case MSR_F15H_PERF_CTL4:
> -	case MSR_F15H_PERF_CTL5:
> +	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
>  		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
>  			return NULL;
> -		fallthrough;

> +		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);

> +		if ((msr == (MSR_F15H_PERF_CTL0 + 2 * idx)) !=
> +		    (type == PMU_TYPE_EVNTSEL))

This is more complicated than it needs to be.  Exploit the fact that CTLn MSRs
are even and CTRn MSRs are odd (I think I got the logic right, but the below is
untested).

And this all needs a comment.


		/*
		 * Each PMU counter has a pair of CTL and CTR MSRs.  CTLn MSRs
		 * (accessed via EVNTSEL) are even, CTRn MSRs are odd.
		 */
		idx = (unsigned int)((msr - MSR_F15H_PERF_CTL0) / 2);
		if (!(msr & 0x1) != (type == PMU_TYPE_EVNTSEL))
			return NULL;

> +			return NULL;
> +		break;


