Re: [RFC PATCH v3 52/58] KVM: x86/pmu/svm: Implement callback to disable MSR interception

+Aaron

On Thu, Aug 01, 2024, Mingwei Zhang wrote:
> +static void amd_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +	int msr_clear = !!(is_passthrough_pmu_enabled(vcpu));
> +	int i;
> +
> +	for (i = 0; i < min(pmu->nr_arch_gp_counters, AMD64_NUM_COUNTERS); i++) {
> +		/*
> +		 * Legacy counters are always available irrespective of any
> +		 * CPUID feature bits, and when X86_FEATURE_PERFCTR_CORE is
> +		 * set, the PERF_LEGACY_CTLx and PERF_LEGACY_CTRx registers
> +		 * are mirrored to PERF_CTLx and PERF_CTRx, respectively.
> +		 */
> +		set_msr_interception(vcpu, svm->msrpm, MSR_K7_EVNTSEL0 + i, 0, 0);
> +		set_msr_interception(vcpu, svm->msrpm, MSR_K7_PERFCTR0 + i, msr_clear, msr_clear);
> +	}
> +
> +	for (i = 0; i < kvm_pmu_cap.num_counters_gp; i++) {
> +		/*
> +		 * PERF_CTLx registers require interception in order to clear
> +		 * the HostOnly bit and set the GuestOnly bit, preventing the
> +		 * PERF_CTRx registers from counting before VM entry and after
> +		 * VM exit.
> +		 */
> +		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTL + 2 * i, 0, 0);
> +
> +		/*
> +		 * Pass through counters exposed to the guest and intercept
> +		 * counters that are unexposed. Do this explicitly since this
> +		 * function may be called multiple times before the vCPU runs.
> +		 */
> +		if (i >= pmu->nr_arch_gp_counters)
> +			msr_clear = 0;

Similar to my comments on the Intel side, explicitly enable interception for
MSRs that don't exist in the guest model in a separate for-loop, i.e. don't
toggle msr_clear in the middle of a loop.
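
E.g. (completely untested, and assuming SVM's current inverted polarity, where
'1' means disable interception):

	for (i = 0; i < pmu->nr_arch_gp_counters; i++)
		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i,
				     msr_clear, msr_clear);

	/*
	 * Explicitly intercept counters that don't exist in the guest model,
	 * as this may be called multiple times before the vCPU runs.
	 */
	for ( ; i < kvm_pmu_cap.num_counters_gp; i++)
		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i,
				     0, 0);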

I would also love to de-dup the bulk of this code, which is very doable since
the base+shift for the MSRs is going to be stashed in kvm_pmu.  All that's needed
on top is unified MSR interception logic, which is something that's been on my
wish list for some time.  SVM's inverted polarity needs to die a horrible death.
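
Very roughly, and with completely made-up names, I'm thinking of a single
common helper with non-inverted polarity that both VMX and SVM implement, e.g.

	/* Hypothetical sketch: "true" always means intercept, on VMX and SVM. */
	void kvm_set_msr_interception(struct kvm_vcpu *vcpu, u32 msr, int type,
				      bool intercept);

so that common PMU code could simply do:

	kvm_set_msr_interception(vcpu, MSR_F15H_PERF_CTR + 2 * i, MSR_TYPE_RW,
				 i >= pmu->nr_arch_gp_counters);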

Lucky for me, Aaron is picking up that torch.

Aaron, what's your ETA on the MSR unification?  No rush, but if you think it'll
be ready in the next month or so, I'll plan on merging that first and landing
this code on top.

> +		set_msr_interception(vcpu, svm->msrpm, MSR_F15H_PERF_CTR + 2 * i, msr_clear, msr_clear);
> +	}
> +
> +	/*
> +	 * In the mediated passthrough vPMU, intercept the global PMU MSRs
> +	 * when the guest PMU owns only a subset of the hardware counters or
> +	 * its version is less than 2.
> +	 */
> +	if (is_passthrough_pmu_enabled(vcpu) && pmu->version > 1 &&

kvm_pmu_has_perf_global_ctrl(), no?

> +	    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp)
> +		msr_clear = 1;
> +	else
> +		msr_clear = 0;
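
And with kvm_pmu_has_perf_global_ctrl(), the whole if/else can collapse to a
single assignment, e.g. (untested):

	msr_clear = is_passthrough_pmu_enabled(vcpu) &&
		    kvm_pmu_has_perf_global_ctrl(pmu) &&
		    pmu->nr_arch_gp_counters == kvm_pmu_cap.num_counters_gp;
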
> +
> +	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_CTL, msr_clear, msr_clear);
> +	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, msr_clear, msr_clear);
> +	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, msr_clear, msr_clear);
> +	set_msr_interception(vcpu, svm->msrpm, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_SET, msr_clear, msr_clear);
> +}
> +
>  struct kvm_pmu_ops amd_pmu_ops __initdata = {
>  	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
>  	.msr_idx_to_pmc = amd_msr_idx_to_pmc,
> @@ -258,6 +312,7 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
>  	.refresh = amd_pmu_refresh,
>  	.init = amd_pmu_init,
>  	.is_rdpmc_passthru_allowed = amd_is_rdpmc_passthru_allowed,
> +	.passthrough_pmu_msrs = amd_passthrough_pmu_msrs,
>  	.EVENTSEL_EVENT = AMD64_EVENTSEL_EVENT,
>  	.MAX_NR_GP_COUNTERS = KVM_AMD_PMC_MAX_GENERIC,
>  	.MIN_NR_GP_COUNTERS = AMD64_NUM_COUNTERS,
> -- 
> 2.46.0.rc1.232.g9752f9e123-goog
> 



