The patch titled
     Subject: arm64: hugetlb: enable __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
has been added to the -mm mm-unstable branch.  Its filename is
     arm64-hugetlb-enable-__have_arch_flush_hugetlb_tlb_range.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/arm64-hugetlb-enable-__have_arch_flush_hugetlb_tlb_range.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: arm64: hugetlb: enable __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
Date: Wed, 2 Aug 2023 09:27:31 +0800

It is better to use the huge page size instead of PAGE_SIZE as the stride
when flushing hugepages, which reduces the number of loop iterations in
__flush_tlb_range().

Let's support the arch-specific flush_hugetlb_tlb_range(), which is used
in hugetlb_unshare_all_pmds(), move_hugetlb_page_tables() and
hugetlb_change_protection() for now.

Note that hugepages based on the contiguous bit have to be invalidated
individually, since the contiguous PTE bit is just a hint and the
hardware may or may not take it into account.

Link: https://lkml.kernel.org/r/20230802012731.62512-1-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Cc: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
Cc: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mina Almasry <almasrymina@xxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: William Kucharski <william.kucharski@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/include/asm/hugetlb.h |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

--- a/arch/arm64/include/asm/hugetlb.h~arm64-hugetlb-enable-__have_arch_flush_hugetlb_tlb_range
+++ a/arch/arm64/include/asm/hugetlb.h
@@ -60,4 +60,19 @@ extern void huge_ptep_modify_prot_commit
 
 #include <asm-generic/hugetlb.h>
 
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end)
+{
+	unsigned long stride = huge_page_size(hstate_vma(vma));
+
+	if (stride == PMD_SIZE)
+		__flush_tlb_range(vma, start, end, stride, false, 2);
+	else if (stride == PUD_SIZE)
+		__flush_tlb_range(vma, start, end, stride, false, 1);
+	else
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+}
+
 #endif /* __ASM_HUGETLB_H */
_
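For illustration only, not part of the patch: a minimal userspace sketch of
why the stride matters.  A loop like the one in __flush_tlb_range() issues
one per-entry invalidation per stride-sized step, so flushing a 2M
PMD-mapped hugepage with a 4K PAGE_SIZE stride costs 512 operations versus
a single one with a PMD_SIZE stride.  The SZ_4K/SZ_2M constants and the
nr_flush_ops() helper are made-up names for this example.

/*
 * Illustrative userspace sketch only -- not kernel code.  It models the
 * per-stride loop in __flush_tlb_range() to show how many per-entry
 * invalidations a 2M range costs with a 4K versus a 2M stride.
 */
#include <stdio.h>

#define SZ_4K	(4UL * 1024)		/* assumed PAGE_SIZE */
#define SZ_2M	(2UL * 1024 * 1024)	/* assumed PMD_SIZE */

static unsigned long nr_flush_ops(unsigned long start, unsigned long end,
				  unsigned long stride)
{
	unsigned long addr, ops = 0;

	/* one invalidation per stride-sized step, as in the kernel's loop */
	for (addr = start; addr < end; addr += stride)
		ops++;

	return ops;
}

int main(void)
{
	unsigned long start = 0, end = SZ_2M;

	printf("PAGE_SIZE stride: %lu invalidations\n",
	       nr_flush_ops(start, end, SZ_4K));	/* 512 */
	printf("PMD_SIZE stride:  %lu invalidations\n",
	       nr_flush_ops(start, end, SZ_2M));	/* 1 */

	return 0;
}

The arch override above additionally passes an exact TLB level hint to
__flush_tlb_range() for the PMD and PUD cases, while contiguous-bit sizes
keep the PAGE_SIZE stride because each PTE must be invalidated individually.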
Shutemov" <kirill@xxxxxxxxxxxxx> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx> Cc: Mina Almasry <almasrymina@xxxxxxxxxx> Cc: Will Deacon <will@xxxxxxxxxx> Cc: William Kucharski <william.kucharski@xxxxxxxxxx> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> --- arch/arm64/include/asm/hugetlb.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) --- a/arch/arm64/include/asm/hugetlb.h~arm64-hugetlb-enable-__have_arch_flush_hugetlb_tlb_range +++ a/arch/arm64/include/asm/hugetlb.h @@ -60,4 +60,19 @@ extern void huge_ptep_modify_prot_commit #include <asm-generic/hugetlb.h> +#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE +static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma, + unsigned long start, + unsigned long end) +{ + unsigned long stride = huge_page_size(hstate_vma(vma)); + + if (stride == PMD_SIZE) + __flush_tlb_range(vma, start, end, stride, false, 2); + else if (stride == PUD_SIZE) + __flush_tlb_range(vma, start, end, stride, false, 1); + else + __flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0); +} + #endif /* __ASM_HUGETLB_H */ _ Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are mm-remove-arguments-of-show_mem.patch mm-make-show_free_areas-static.patch mm-factor-out-vma-stack-and-heap-checks.patch drm-amdkfd-use-vma_is_initial_stack-and-vma_is_initial_heap.patch selinux-use-vma_is_initial_stack-and-vma_is_initial_heap.patch perf-core-use-vma_is_initial_stack-and-vma_is_initial_heap.patch arm64-hugetlb-enable-__have_arch_flush_hugetlb_tlb_range.patch