On Fri, Feb 02, 2024 at 08:07:54AM +0000, Ryan Roberts wrote:
> When core code iterates over a range of ptes and calls ptep_get() for
> each of them, if the range happens to cover contpte mappings, the number
> of pte reads becomes amplified by a factor of the number of PTEs in a
> contpte block. This is because for each call to ptep_get(), the
> implementation must read all of the ptes in the contpte block to which
> it belongs to gather the access and dirty bits.
>
> This causes a hotspot for fork(), as well as operations that unmap
> memory such as munmap(), exit() and madvise(MADV_DONTNEED). Fortunately
> we can fix this by implementing pte_batch_hint(), which allows these
> iterators to skip getting the contpte tail ptes when gathering the batch
> of ptes to operate on. This results in the number of PTE reads returning
> to 1 per pte.
>
> Tested-by: John Hubbard <jhubbard@xxxxxxxxxx>
> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>

Acked-by: Mark Rutland <mark.rutland@xxxxxxx>

Mark.

> ---
>  arch/arm64/include/asm/pgtable.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index ad04adb7b87f..353ea67b5d75 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1220,6 +1220,15 @@ static inline void contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
>  	__contpte_try_unfold(mm, addr, ptep, pte);
>  }
>
> +#define pte_batch_hint pte_batch_hint
> +static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
> +{
> +	if (!pte_valid_cont(pte))
> +		return 1;
> +
> +	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
> +}
> +
>  /*
>   * The below functions constitute the public API that arm64 presents to the
>   * core-mm to manipulate PTE entries within their page tables (or at least this
> --
> 2.25.1
>
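
A note for anyone puzzling over the arithmetic in pte_batch_hint():
arm64 PTEs are 8 bytes, so (unsigned long)ptep >> 3 is the linear index
of the entry within the page table, masking with CONT_PTES - 1 gives
its offset inside the contpte block, and subtracting that from
CONT_PTES yields the number of entries from ptep through the end of
the block, i.e. how many ptes an iterator may treat as one batch.
Below is a minimal userspace sketch of the same calculation, assuming
CONT_PTES is 16 (the 4K-page configuration); batch_hint() and the base
address are illustrative stand-ins, not kernel API:

#include <stdio.h>
#include <stdint.h>

#define CONT_PTES	16	/* assumed: 4K pages, 64K contpte block */

/* Illustrative stand-in for the pte_batch_hint() arithmetic above. */
static unsigned int batch_hint(uintptr_t ptep)
{
	/*
	 * ptep >> 3: index of this 8-byte entry in the table.
	 * & (CONT_PTES - 1): offset within its contpte block.
	 * CONT_PTES - offset: entries left through the end of the block.
	 */
	return CONT_PTES - ((ptep >> 3) & (CONT_PTES - 1));
}

int main(void)
{
	uintptr_t base = 0x1000;	/* hypothetical block-aligned table address */

	for (unsigned int i = 0; i < CONT_PTES; i++)
		printf("entry %2u -> hint %2u\n", i,
		       batch_hint(base + i * sizeof(uint64_t)));

	return 0;
}

For a block-aligned ptep this prints 16, decreasing to 1 for the last
entry of the block, so an iterator that honours the hint reads each
contpte block once rather than CONT_PTES times, matching the "1 read
per pte" claim in the commit message.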