On 06/16/22 20:05, Baoquan He wrote:
> On 06/16/22 at 11:34am, Baolin Wang wrote:
> > The HugeTLB address ranges are linearly scanned during fork, unmap and
> > remap operations, and the linear scan can skip to the end of the range
> > mapped by a page table page when it hits a non-present entry, which
> > speeds up linear scanning of the HugeTLB address ranges.
> >
> > So hugetlb_mask_last_hp() is introduced to update the address in the
> > HugeTLB linear-scanning loops by obtaining the mask of the last huge
> > page mapped by the associated page table page [1] when a non-present
> > entry is encountered.
> >
> > To handle the ARM64-specific cont-PTE/PMD HugeTLB sizes, this patch
> > implements an ARM64-specific hugetlb_mask_last_hp() for this case.
> >
> > [1] https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.kravetz@xxxxxxxxxx/
> >
> > Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> > ---
> > Note: this patch is based on the series: "hugetlb: speed up linear
> > address scanning" from Mike. Mike, please fold it into your series.
> > Thanks.
> > ---
> >  arch/arm64/mm/hugetlbpage.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> > index e2a5ec9..958935c 100644
> > --- a/arch/arm64/mm/hugetlbpage.c
> > +++ b/arch/arm64/mm/hugetlbpage.c
> > @@ -368,6 +368,26 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
> >  	return NULL;
> >  }
> >
> > +unsigned long hugetlb_mask_last_hp(struct hstate *h)
> > +{
> > +	unsigned long hp_size = huge_page_size(h);
>
> hp_size may not be a good name; it reminds me of hotplug. I would name
> it hpage_size even though it adds a few more characters.

How about just hugetlb_mask_last_page? Since the routine is prefixed with
'hugetlb' and we are passing in a pointer to an hstate, I think there is
enough context to know we are talking about a huge page mask as opposed
to a base page mask.
If OK, I will change the name in my patches and here.
--
Mike Kravetz