On Wed, Nov 10, 2021 at 09:15:01PM -0700, Yu Zhao wrote:
> Some architectures automatically set the accessed bit in PTEs, e.g.,
> x86 and arm64 v8.2. On architectures that do not have this capability,
> clearing the accessed bit in a PTE triggers a page fault following the
> TLB miss of this PTE.
>
> Being aware of this capability can help make better decisions, i.e.,
> whether to limit the size of each batch of PTEs and the burst of
> batches when clearing the accessed bit.
>
> Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
> Tested-by: Konstantin Kharlamov <Hi-Angel@xxxxxxxxx>
> ---
>  arch/arm64/include/asm/cpufeature.h |  5 +++++
>  arch/arm64/include/asm/pgtable.h    | 13 ++++++++-----
>  arch/arm64/kernel/cpufeature.c      | 10 ++++++++++
>  arch/arm64/tools/cpucaps            |  1 +
>  arch/x86/include/asm/pgtable.h      |  6 +++---
>  include/linux/pgtable.h             | 13 +++++++++++++
>  mm/memory.c                         | 14 +-------------
>  7 files changed, 41 insertions(+), 21 deletions(-)

*Please* cc the maintainers on arch patches. I asked you that last time,
too:

https://lore.kernel.org/r/20210819091923.GA15467@willie-the-truck

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 6ec7036ef7e1..940615d33845 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2157,6 +2157,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.matches = has_hw_dbm,
>  		.cpu_enable = cpu_enable_hw_dbm,
>  	},
> +	{
> +		.desc = "Hardware update of the Access flag",
> +		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.capability = ARM64_HW_AF,
> +		.sys_reg = SYS_ID_AA64MMFR1_EL1,
> +		.sign = FTR_UNSIGNED,
> +		.field_pos = ID_AA64MMFR1_HADBS_SHIFT,
> +		.min_field_value = 1,
> +		.matches = has_cpuid_feature,
> +	},

As before, please don't make this a system feature as it will prohibit
onlining of late CPUs with mismatched access flag support and I really
don't see that being necessary. You should just be able to use
arch_faults_on_old_pte() as-is.

Will
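
[Editor's aside, for illustration only and not part of the thread: a minimal
sketch of what keying the batching decision off the existing
arch_faults_on_old_pte() helper might look like on the mm side. The helper
is real (the generic fallback returns true when an architecture does not
override it); the wrapper name below is hypothetical.]

/*
 * Hypothetical sketch: instead of a new ARM64_HW_AF system capability,
 * an mm-side caller can ask arch_faults_on_old_pte() whether clearing
 * the accessed bit will cost a page fault on the next access, and size
 * its batches accordingly.
 */
#include <linux/pgtable.h>

static inline bool hw_sets_accessed_bit(void)
{
	/*
	 * arch_faults_on_old_pte() returns true when an old (accessed
	 * bit cleared) PTE faults on the next access, i.e. when the
	 * hardware does not set the bit for us. Note that the arm64
	 * implementation expects to be called with preemption disabled,
	 * since the answer is per-CPU.
	 */
	return !arch_faults_on_old_pte();
}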