The patch titled
     Subject: mm/hugetlb: move page order check inside hugetlb_cma_reserve()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Subject: mm/hugetlb: move page order check inside hugetlb_cma_reserve()
Date: Fri, 9 Feb 2024 11:12:21 +0530

All platforms could benefit from a page order check against
MAX_PAGE_ORDER before allocating a CMA area for gigantic hugetlb pages.
Let's move this check from individual platforms to generic hugetlb.

Link: https://lkml.kernel.org/r/20240209054221.1403364-1-anshuman.khandual@xxxxxxx
Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Reviewed-by: Jane Chu <jane.chu@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/mm/hugetlbpage.c   |    7 -------
 arch/powerpc/mm/hugetlbpage.c |    4 +---
 mm/hugetlb.c                  |    7 +++++++
 3 files changed, 8 insertions(+), 10 deletions(-)

--- a/arch/arm64/mm/hugetlbpage.c~mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve
+++ a/arch/arm64/mm/hugetlbpage.c
@@ -45,13 +45,6 @@ void __init arm64_hugetlb_cma_reserve(vo
 	else
 		order = CONT_PMD_SHIFT - PAGE_SHIFT;
 
-	/*
-	 * HugeTLB CMA reservation is required for gigantic
-	 * huge pages which could not be allocated via the
-	 * page allocator. Just warn if there is any change
-	 * breaking this assumption.
-	 */
-	WARN_ON(order <= MAX_PAGE_ORDER);
 	hugetlb_cma_reserve(order);
 }
 #endif /* CONFIG_CMA */
--- a/arch/powerpc/mm/hugetlbpage.c~mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve
+++ a/arch/powerpc/mm/hugetlbpage.c
@@ -614,8 +614,6 @@ void __init gigantic_hugetlb_cma_reserve
 	 */
 	order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
 
-	if (order) {
-		VM_WARN_ON(order <= MAX_PAGE_ORDER);
+	if (order)
 		hugetlb_cma_reserve(order);
-	}
 }
--- a/mm/hugetlb.c~mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve
+++ a/mm/hugetlb.c
@@ -7800,6 +7800,13 @@ void __init hugetlb_cma_reserve(int orde
 	bool node_specific_cma_alloc = false;
 	int nid;
 
+	/*
+	 * HugeTLB CMA reservation is required for gigantic
+	 * huge pages which could not be allocated via the
+	 * page allocator. Just warn if there is any change
+	 * breaking this assumption.
+	 */
+	VM_WARN_ON(order <= MAX_PAGE_ORDER);
 	cma_reserve_called = true;
 
 	if (!hugetlb_cma_size)
_

Patches currently in -mm which might be from anshuman.khandual@xxxxxxx are

mm-memblock-add-memblock_rsrv_noinit-into-flagname-array.patch
mm-cma-dont-treat-bad-input-arguments-for-cma_alloc-as-its-failure.patch
mm-cma-drop-config_cma_debug.patch
mm-cma-make-max_cma_areas-=-config_cma_areas.patch
mm-cma-add-sysfs-file-release_pages_success.patch
mm-hugetlb-move-page-order-check-inside-hugetlb_cma_reserve.patch
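[Editor's note] For readers skimming the diff: the consolidated VM_WARN_ON()
encodes the rule that a CMA reservation only makes sense for gigantic huge
pages, i.e. orders larger than the buddy allocator can serve.  Below is a
minimal userspace sketch of that invariant; the PAGE_SHIFT and
MAX_PAGE_ORDER values are assumptions for a typical 4K-page configuration,
not read from any particular kernel build.

/*
 * Sketch of the invariant the patch centralizes in hugetlb_cma_reserve():
 * CMA is needed only for "gigantic" pages, whose order exceeds the largest
 * order the buddy allocator can hand out.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT     12   /* assumed: 4K base pages */
#define MAX_PAGE_ORDER 10   /* assumed: default buddy allocator limit */

/* Mirrors the condition guarded by VM_WARN_ON() in the patch. */
static int order_is_gigantic(int order)
{
	return order > MAX_PAGE_ORDER;
}

int main(void)
{
	int pud_order = 30 - PAGE_SHIFT; /* 1G page: order 18 */
	int pmd_order = 21 - PAGE_SHIFT; /* 2M page: order 9  */

	/* A 1G page cannot come from the buddy allocator, so CMA is justified. */
	assert(order_is_gigantic(pud_order));

	/* A 2M page can; passing its order would trip the new VM_WARN_ON(). */
	assert(!order_is_gigantic(pmd_order));

	printf("order %d is gigantic, order %d is not\n", pud_order, pmd_order);
	return 0;
}

With the check living in hugetlb_cma_reserve() itself, every caller gets
the same sanity warning and the per-architecture copies become redundant.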