On arm64, ID_AA64MMFR0_EL1.PARange encodes the maximum physical
address range supported by the CPU. Add a helper to decode this
field to the actual physical address shift. If we hit an
unallocated value, return the maximum range supported by the
kernel. This will be used by KVM when setting up VTCR_EL2.T0SZ,
which is about to be moved; having this helper keeps the code
movement cleaner.

Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
Cc: James Morse <james.morse@xxxxxxx>
Cc: Christoffer Dall <cdall@xxxxxxxxxx>
Reviewed-by: Eric Auger <eric.auger@xxxxxxxxxx>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
---
Changes since V2:
 - Split the patch
 - Limit the physical shift only for unrecognized values
---
 arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1717ba1..855cf0e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -530,6 +530,19 @@ void arm64_set_ssbd_mitigation(bool state);
 static inline void arm64_set_ssbd_mitigation(bool state) {}
 #endif
 
+static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
+{
+	switch (parange) {
+	case 0: return 32;
+	case 1: return 36;
+	case 2: return 40;
+	case 3: return 42;
+	case 4: return 44;
+	case 5: return 48;
+	case 6: return 52;
+	default: return CONFIG_ARM64_PA_BITS;
+	}
+}
 #endif /* __ASSEMBLY__ */
 
 #endif
--
2.7.4
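
For context, a minimal usage sketch (not part of this patch) of how
a caller such as KVM might derive the stage-2 input size from this
helper. read_sanitised_ftr_reg() and
cpuid_feature_extract_unsigned_field() are the existing arm64
cpufeature API; kvm_get_phys_shift() is a hypothetical wrapper, and
the T0SZ value follows the architectural "64 - size-in-bits"
encoding:

	/* Hypothetical example, not part of this patch. */
	static inline u32 kvm_get_phys_shift(void)
	{
		/* Use the system-wide sanitised copy of ID_AA64MMFR0_EL1. */
		u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
		int parange = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_PARANGE_SHIFT);

		return id_aa64mmfr0_parange_to_phys_shift(parange);
	}

	/* T0SZ encodes the input address size as 64 - size-in-bits. */
	u64 t0sz = 64 - kvm_get_phys_shift();

Because the helper clamps unallocated PARange values to
CONFIG_ARM64_PA_BITS, a caller like the above never programs a T0SZ
smaller than what the kernel itself can handle.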