On Thu, Jun 16, 2022 at 02:05:16PM -0700, Mike Kravetz wrote:
> From: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>
> The HugeTLB address ranges are linearly scanned during fork, unmap and
> remap operations, and the linear scan can skip to the end of the range
> mapped by a page table page if it hits a non-present entry, which helps
> to speed up linear scanning of the HugeTLB address ranges.
>
> So hugetlb_mask_last_page() is introduced to update the address in the
> HugeTLB linear scanning loop by getting the last huge page mapped by the
> associated page table page[1] when a non-present entry is encountered.
>
> To handle the ARM64-specific cont-pte/pmd size HugeTLB cases, this patch
> implements an ARM64-specific hugetlb_mask_last_page().
>
> [1] https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.kravetz@xxxxxxxxxx/
>
> Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

Acked-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>

Thanks.