The ARM PMU is an optional CPU feature, so we should consult the CPUID
registers before accessing any PMU related registers. However the KVM
code accesses some PMU registers (PMUSERENR_EL0 and PMSELR_EL0)
unconditionally, when activating the traps.
This hasn't been a problem so far, because seemingly every silicon
implementation includes a PMU, with KVM guests being the lone
exception, and those never ran the KVM host code.
As this is about to change with nested virt, we need to guard PMU
register accesses with a proper CPU feature check.

Add a new CPU capability, which marks whether we have at least the
basic PMUv3 registers available. Use that in the KVM VHE code to check
before accessing the PMU registers.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
Hi,

I am not sure whether a new CPU capability is a bit over the top here,
and whether we should use a simple static key instead? A rough sketch
of that alternative is appended below the patch.

Cheers,
Andre

 arch/arm64/include/asm/cpucaps.h        |  3 ++-
 arch/arm64/kernel/cpufeature.c          | 10 ++++++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h |  9 ++++++---
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b77d997b173b..e3a002583c43 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -66,7 +66,8 @@
 #define ARM64_WORKAROUND_1508412		58
 #define ARM64_HAS_LDAPR				59
 #define ARM64_KVM_PROTECTED_MODE		60
+#define ARM64_HAS_PMUV3				61
 
-#define ARM64_NCAPS				61
+#define ARM64_NCAPS				62
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99eddec0a46..54d23d38322d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2154,6 +2154,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.min_field_value = 1,
 	},
+	{
+		.desc = "ARM PMUv3 support",
+		.capability = ARM64_HAS_PMUV3,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64DFR0_EL1,
+		.sign = FTR_SIGNED,
+		.field_pos = ID_AA64DFR0_PMUVER_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+	},
 	{},
 };
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 84473574c2e7..622baf7b7371 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -90,15 +90,18 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
 	 * EL1 instead of being trapped to EL2.
 	 */
-	write_sysreg(0, pmselr_el0);
-	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+	if (cpus_have_final_cap(ARM64_HAS_PMUV3)) {
+		write_sysreg(0, pmselr_el0);
+		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+	}
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
 static inline void __deactivate_traps_common(void)
 {
 	write_sysreg(0, hstr_el2);
-	write_sysreg(0, pmuserenr_el0);
+	if (cpus_have_final_cap(ARM64_HAS_PMUV3))
+		write_sysreg(0, pmuserenr_el0);
 }
 
 static inline void ___activate_traps(struct kvm_vcpu *vcpu)
-- 
2.17.1
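
P.S. For reference, here is a rough, untested sketch of what the static
key alternative could look like. The names kvm_arm_pmu_available,
kvm_arm_support_pmu_v3() and kvm_host_pmu_init() are made up for
illustration; the init hook would need to be called from the PMUv3
driver once it has successfully probed a hardware PMU:

#include <linux/jump_label.h>

/* Flipped to true once the PMUv3 driver has found a hardware PMU. */
DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);

static __always_inline bool kvm_arm_support_pmu_v3(void)
{
	return static_branch_likely(&kvm_arm_pmu_available);
}

/* To be called from the PMUv3 driver's probe path. */
void kvm_host_pmu_init(void)
{
	static_branch_enable(&kvm_arm_pmu_available);
}

The hyp switch code would then test kvm_arm_support_pmu_v3() instead of
the CPU capability:

	if (kvm_arm_support_pmu_v3()) {
		write_sysreg(0, pmselr_el0);
		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
	}

One open question with that approach is whether the static branch would
be safe to use from the nVHE hyp code, which does not run with the full
kernel mapping; the CPU capability sidesteps that, since it is
finalised before KVM initialises.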