Re: [PATCH] kvm: x86: Add Intel PMU MSRs to msrs_to_save[]

On 9/6/19 1:30 PM, Jim Mattson wrote:
On Fri, Sep 6, 2019 at 12:59 PM Krish Sadhukhan
<krish.sadhukhan@xxxxxxxxxx> wrote:

On 9/6/19 9:48 AM, Jim Mattson wrote:

On Wed, Aug 21, 2019 at 11:20 AM Jim Mattson <jmattson@xxxxxxxxxx> wrote:

These MSRs should be enumerated by KVM_GET_MSR_INDEX_LIST, so that
userspace knows that these MSRs may be part of the vCPU state.

Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
Reviewed-by: Eric Hankland <ehankland@xxxxxxxxxx>
Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>

---
  arch/x86/kvm/x86.c | 41 +++++++++++++++++++++++++++++++++++++++++
  1 file changed, 41 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 93b0bd45ac73..ecaaa411538f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1140,6 +1140,42 @@ static u32 msrs_to_save[] = {
         MSR_IA32_RTIT_ADDR1_A, MSR_IA32_RTIT_ADDR1_B,
         MSR_IA32_RTIT_ADDR2_A, MSR_IA32_RTIT_ADDR2_B,
         MSR_IA32_RTIT_ADDR3_A, MSR_IA32_RTIT_ADDR3_B,
+       MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
+       MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
+       MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
+       MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
+       MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
+       MSR_ARCH_PERFMON_PERFCTR0 + 2, MSR_ARCH_PERFMON_PERFCTR0 + 3,
+       MSR_ARCH_PERFMON_PERFCTR0 + 4, MSR_ARCH_PERFMON_PERFCTR0 + 5,
+       MSR_ARCH_PERFMON_PERFCTR0 + 6, MSR_ARCH_PERFMON_PERFCTR0 + 7,
+       MSR_ARCH_PERFMON_PERFCTR0 + 8, MSR_ARCH_PERFMON_PERFCTR0 + 9,
+       MSR_ARCH_PERFMON_PERFCTR0 + 10, MSR_ARCH_PERFMON_PERFCTR0 + 11,
+       MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13,
+       MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15,
+       MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17,
+       MSR_ARCH_PERFMON_PERFCTR0 + 18, MSR_ARCH_PERFMON_PERFCTR0 + 19,
+       MSR_ARCH_PERFMON_PERFCTR0 + 20, MSR_ARCH_PERFMON_PERFCTR0 + 21,
+       MSR_ARCH_PERFMON_PERFCTR0 + 22, MSR_ARCH_PERFMON_PERFCTR0 + 23,
+       MSR_ARCH_PERFMON_PERFCTR0 + 24, MSR_ARCH_PERFMON_PERFCTR0 + 25,
+       MSR_ARCH_PERFMON_PERFCTR0 + 26, MSR_ARCH_PERFMON_PERFCTR0 + 27,
+       MSR_ARCH_PERFMON_PERFCTR0 + 28, MSR_ARCH_PERFMON_PERFCTR0 + 29,
+       MSR_ARCH_PERFMON_PERFCTR0 + 30, MSR_ARCH_PERFMON_PERFCTR0 + 31,
+       MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 6, MSR_ARCH_PERFMON_EVENTSEL0 + 7,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 8, MSR_ARCH_PERFMON_EVENTSEL0 + 9,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 10, MSR_ARCH_PERFMON_EVENTSEL0 + 11,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 18, MSR_ARCH_PERFMON_EVENTSEL0 + 19,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 20, MSR_ARCH_PERFMON_EVENTSEL0 + 21,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 22, MSR_ARCH_PERFMON_EVENTSEL0 + 23,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 24, MSR_ARCH_PERFMON_EVENTSEL0 + 25,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 26, MSR_ARCH_PERFMON_EVENTSEL0 + 27,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 28, MSR_ARCH_PERFMON_EVENTSEL0 + 29,
+       MSR_ARCH_PERFMON_EVENTSEL0 + 30, MSR_ARCH_PERFMON_EVENTSEL0 + 31,
  };


Should we have separate #defines for the MSRs that are at an offset from the base MSR?
How about macros that take an offset argument, rather than a whole
slew of new macros?


Yes, that works too.
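For illustration, such helpers might look like the following (a sketch
only; these macro names are hypothetical, not existing msr-index.h
definitions):

#define MSR_ARCH_PERFMON_PERFCTR(n)   (MSR_ARCH_PERFMON_PERFCTR0 + (n))
#define MSR_ARCH_PERFMON_EVENTSEL(n)  (MSR_ARCH_PERFMON_EVENTSEL0 + (n))

/* The msrs_to_save[] entries above would then read, e.g.: */
        MSR_ARCH_PERFMON_PERFCTR(2), MSR_ARCH_PERFMON_PERFCTR(3),
        MSR_ARCH_PERFMON_EVENTSEL(2), MSR_ARCH_PERFMON_EVENTSEL(3),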



  static unsigned num_msrs_to_save;
@@ -4989,6 +5025,11 @@ static void kvm_init_msr_list(void)
         u32 dummy[2];
         unsigned i, j;

+       BUILD_BUG_ON_MSG(INTEL_PMC_MAX_FIXED != 4,
+                        "Please update the fixed PMCs in msrs_to_save[]");
+       BUILD_BUG_ON_MSG(INTEL_PMC_MAX_GENERIC != 32,
+                        "Please update the generic perfctr/eventsel MSRs in msrs_to_save[]");


Just curious how this assertion can ever fire, given that we are comparing two compile-time constants here.
Someone just has to change the macros. In fact, I originally developed
this change on a version of the kernel where INTEL_PMC_MAX_FIXED was
3, and so I had:

+       BUILD_BUG_ON_MSG(INTEL_PMC_MAX_FIXED != 3,
+                        "Please update the fixed PMCs in msrs_to_save[]");
When I cherry-picked the change to Linux tip, the BUILD_BUG_ON fired,
and I updated the fixed PMCs in msrs_to_save[].
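In other words, the assertion compiles to nothing for as long as the
constant still matches, and becomes a compile-time error the moment
someone bumps the macro. A simplified sketch of the mechanism (the
kernel's version in include/linux/build_bug.h is built on
compiletime_assert(), but this is the idea):

/* Breaks the build, printing msg, whenever cond is true. */
#define BUILD_BUG_ON_MSG(cond, msg)  _Static_assert(!(cond), msg)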

+
         for (i = j = 0; i < ARRAY_SIZE(msrs_to_save); i++) {
                 if (rdmsr_safe(msrs_to_save[i], &dummy[0], &dummy[1]) < 0)
                         continue;
--
2.23.0.187.g17f5b7556c-goog
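(For context: the list assembled above is what userspace retrieves with
the KVM_GET_MSR_INDEX_LIST system ioctl, typically in two calls, where
the first call undersizes the buffer so that the kernel reports the
required count. A sketch, error handling omitted:)

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

        struct kvm_msr_list probe = { .nmsrs = 0 }, *list;
        int kvm = open("/dev/kvm", O_RDWR);

        /* First call fails with E2BIG but writes back the needed count. */
        ioctl(kvm, KVM_GET_MSR_INDEX_LIST, &probe);
        list = malloc(sizeof(*list) + probe.nmsrs * sizeof(__u32));
        list->nmsrs = probe.nmsrs;
        /* Second call fills list->indices[] with the MSR indices. */
        ioctl(kvm, KVM_GET_MSR_INDEX_LIST, list);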

Ping.


Also, since these MSRs are Intel-specific, should they be enumerated via 'intel_pmu_ops'?
msrs_to_save[] is filtered to remove MSRs that aren't supported on the
host. Or are you asking something else?
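To spell that out: kvm_init_msr_list() probes every entry with
rdmsr_safe() and compacts the survivors, so Intel-only MSRs that fault
on an AMD host never reach the list userspace sees. A condensed sketch
of that loop (later per-MSR filtering omitted):

        for (i = j = 0; i < ARRAY_SIZE(msrs_to_save); i++) {
                /* rdmsr_safe() returns non-zero if the host #GPs. */
                if (rdmsr_safe(msrs_to_save[i], &dummy[0], &dummy[1]) < 0)
                        continue;
                if (j < i)      /* compact supported MSRs to the front */
                        msrs_to_save[j] = msrs_to_save[i];
                j++;
        }
        num_msrs_to_save = j;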


I am referring to the fact that we are enumerating Intel-specific MSRs in the generic KVM code. Should there be some sort of #define guard so that this code isn't compiled on AMD?
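(For concreteness, the kind of guard I mean would be something like
the following; purely hypothetical, since x86.c is common code shared
by kvm-intel and kvm-amd and no such split exists there today:)

#if IS_ENABLED(CONFIG_KVM_INTEL)
        MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
        /* ... remaining Intel PMU MSRs ... */
#endif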




