Patch "KVM: x86/pmu: Truncate counter value to allowed width on write" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    KVM: x86/pmu: Truncate counter value to allowed width on write

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-x86-pmu-truncate-counter-value-to-allowed-width-.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 76da8887abaa6aa6d00484823d5b4e2a13b3c89d
Author: Roman Kagan <rkagan@xxxxxxxxx>
Date:   Thu May 4 14:00:42 2023 +0200

    KVM: x86/pmu: Truncate counter value to allowed width on write
    
    [ Upstream commit b29a2acd36dd7a33c63f260df738fb96baa3d4f8 ]
    
    Performance counters are defined to have width less than 64 bits.  The
    vPMU code maintains the counters in u64 variables but assumes the value
    to fit within the defined width.  However, for Intel non-full-width
    counters (MSR_IA32_PERFCTRx) the value received from the guest is
    truncated to 32 bits and then sign-extended to full 64 bits.  If a
    negative value is set, it's sign-extended to 64 bits, but then in
    kvm_pmu_incr_counter() it's incremented, truncated, and compared to the
    previous value for overflow detection.
    
    That previous value is not truncated, so it always compares greater than
    the truncated new one, and a PMI is injected.  If the PMI handler itself
    writes a negative counter value, the vCPU never quits the PMI loop.
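
    For reference, the overflow check in question looks roughly like this
    (a paraphrased sketch of the 6.1-era kvm_pmu_incr_counter() in
    arch/x86/kvm/pmu.c; see the tree for the exact code):

        static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
        {
                u64 prev_count;

                /* May hold sign-extended bits above the counter width. */
                prev_count = pmc->counter;
                pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);

                reprogram_counter(pmc);
                /* Always true when prev_count exceeded the bit width. */
                if (pmc->counter < prev_count)
                        __kvm_perf_overflow(pmc, false);
        }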
    
    Turns out that the Linux PMI handler actually does write the counter with
    the value just read with RDPMC, so when no full-width support is exposed
    via MSR_IA32_PERF_CAPABILITIES, and the guest initializes the counter to
    a negative value, it locks up.
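
    The arithmetic of that loop can be reproduced in a stand-alone program
    (an illustrative model only, not KVM code; it assumes a 48-bit counter
    and the 32-bit truncate-and-sign-extend that KVM applies to
    non-full-width MSR writes):

        #include <stdio.h>
        #include <stdint.h>

        /* pmc_bitmask() equivalent for a 48-bit counter. */
        #define COUNTER_MASK ((1ULL << 48) - 1)

        int main(void)
        {
                /* Guest writes -100 via MSR_IA32_PERFCTRx; KVM truncates
                 * the value to 32 bits and sign-extends it to 64.
                 */
                uint64_t counter = (uint64_t)(int32_t)-100;
                uint64_t prev = counter;

                /* Emulated increment, as in kvm_pmu_incr_counter(). */
                counter = (counter + 1) & COUNTER_MASK;

                /* Always true: prev kept the sign-extended bits 63:48. */
                if (counter < prev)
                        printf("spurious overflow -> PMI injected\n");
                return 0;
        }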
    
    This has been observed in the field, for example, when the guest configures
    atop to use perfevents and runs two instances of it simultaneously.
    
    To address the problem, maintain the invariant that the counter value
    always fits in the defined bit width, by truncating the received value
    in the respective set_msr methods.  For better readability, factor the
    truncation out into a helper function, pmc_write_counter(), shared by
    the vmx and svm parts.
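
    As a hypothetical walk-through of the fixed behaviour (values are
    assumed: a 48-bit counter with pmc->counter initially 0):

        pmc_write_counter(pmc, 0xffffffff80000001ULL);
        /* pmc->counter == 0x0000ffff80000001: the sign-extended high bits
         * are masked off, so a later increment-and-compare in
         * kvm_pmu_incr_counter() no longer sees a spurious wrap.
         */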
    
    Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
    Cc: stable@xxxxxxxxxxxxxxx
    Signed-off-by: Roman Kagan <rkagan@xxxxxxxxx>
    Link: https://lore.kernel.org/all/20230504120042.785651-1-rkagan@xxxxxxxxx
    Tested-by: Like Xu <likexu@xxxxxxxxxxx>
    [sean: tweak changelog, s/set/write in the helper]
    Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index c976490b75568..3666578b88a00 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -63,6 +63,12 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
 	return counter & pmc_bitmask(pmc);
 }
 
+static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val)
+{
+	pmc->counter += val - pmc_read_counter(pmc);
+	pmc->counter &= pmc_bitmask(pmc);
+}
+
 static inline void pmc_release_perf_event(struct kvm_pmc *pmc)
 {
 	if (pmc->perf_event) {
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 9d65cd095691b..1cb2bf9808f57 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -149,7 +149,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	/* MSR_PERFCTRn */
 	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
-		pmc->counter += data - pmc_read_counter(pmc);
+		pmc_write_counter(pmc, data);
 		pmc_update_sample_period(pmc);
 		return 0;
 	}
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9fabfe71fd879..9a75a0d5deae1 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -461,11 +461,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			if (!msr_info->host_initiated &&
 			    !(msr & MSR_PMC_FULL_WIDTH_BIT))
 				data = (s64)(s32)data;
-			pmc->counter += data - pmc_read_counter(pmc);
+			pmc_write_counter(pmc, data);
 			pmc_update_sample_period(pmc);
 			return 0;
 		} else if ((pmc = get_fixed_pmc(pmu, msr))) {
-			pmc->counter += data - pmc_read_counter(pmc);
+			pmc_write_counter(pmc, data);
 			pmc_update_sample_period(pmc);
 			return 0;
 		} else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {


