On Wed, Dec 09, 2020 at 06:35:09PM +0000, Catalin Marinas wrote:
> On Wed, Dec 09, 2020 at 04:39:50PM +0000, Will Deacon wrote:
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index da250e4741bd..3424f5881390 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -764,6 +764,18 @@ static inline bool cpu_has_hw_af(void)
> >  						ID_AA64MMFR1_HADBS_SHIFT);
> >  }
> >  
> > +static inline bool system_has_hw_af(void)
> > +{
> > +	u64 mmfr1;
> > +
> > +	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
> > +		return false;
> > +
> > +	mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > +	return cpuid_feature_extract_unsigned_field(mmfr1,
> > +						ID_AA64MMFR1_HADBS_SHIFT);
> > +}
> 
> Could we not add a new system-wide cpu feature that checks for hardware
> AF? This read_sanitised_ftr_reg() does a binary search on each
> invocation.

I posted a diff [1] which would allow removing the binary search for
cases where we can pass the register encoding as a constant (like this),
but honestly, it's not like we have many ID registers so I doubt it
really matters in the grand scheme of things.

That said, I'm spinning a v2 anyway so I can include it for comments
since I haven't posted it as a proper patch before.

Will

[1] https://lore.kernel.org/r/20201202172727.GC29813@willie-the-truck