On Mon, Jul 31, 2023 at 4:14 PM Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:
>
> It is better to use huge_page_size() for hugepage (HugeTLB) mappings
> instead of PAGE_SIZE as the stride, as is already done in
> flush_pmd/pud_tlb_range(); this reduces the number of loop iterations
> in __flush_tlb_range().
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/tlbflush.h | 21 +++++++++++----------
>  1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..25e35e6f8093 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -360,16 +360,17 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> -				   unsigned long start, unsigned long end)
> -{
> -	/*
> -	 * We cannot use leaf-only invalidation here, since we may be invalidating
> -	 * table entries as part of collapsing hugepages or moving page tables.
> -	 * Set the tlb_level to 0 because we can not get enough information here.
> -	 */
> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
> -}
> +/*
> + * We cannot use leaf-only invalidation here, since we may be invalidating
> + * table entries as part of collapsing hugepages or moving page tables.
> + * Set the tlb_level to 0 because we can not get enough information here.
> + */
> +#define flush_tlb_range(vma, start, end)				\
> +	__flush_tlb_range(vma, start, end,				\
> +			  ((vma)->vm_flags & VM_HUGETLB)		\
> +			  ? huge_page_size(hstate_vma(vma))		\
> +			  : PAGE_SIZE, false, 0)
> +

Seems like a good idea. I wonder if a better implementation would be one
based on MMU_GATHER_PAGE_SIZE; in that case we would get a large stride
for other large folios as well, such as THP (a rough sketch of what I
have in mind is below my sign-off).

>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  {
> --
> 2.41.0
>

Thanks
Barry
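
---8<---
A rough, completely untested sketch of the MMU_GATHER_PAGE_SIZE idea,
illustrative only and not the existing arm64 tlb_flush(): assuming
CONFIG_MMU_GATHER_PAGE_SIZE is enabled, the core mm records the size of
the entries being unmapped in tlb->page_size via tlb_change_page_size(),
so the arch tlb_flush() could use that as the invalidation stride. The
false/0 level arguments are carried over from the hunk above for
brevity.

static inline void tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
	/*
	 * tlb->page_size is PAGE_SIZE, PMD_SIZE, etc. depending on what
	 * was unmapped, so hugetlb, THP and other large folios all get a
	 * matching stride; fall back to PAGE_SIZE if it was never set.
	 */
	unsigned long stride = tlb->page_size ?: PAGE_SIZE;

	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, false, 0);
}

With 4K base pages, a 2MB range then takes a single TLBI at a 2MB stride
instead of the 512 page-sized ones a PAGE_SIZE stride generates.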