If CONFIG_NUMA isn't set, there is no need to ensure that
the hugetlb cma area belongs to a specific numa node.

min/max_low_pfn can be used for limiting the maximum size
of the hugetlb_cma area.

Also for_each_mem_pfn_range() is defined only if
CONFIG_HAVE_MEMBLOCK_NODE_MAP is set, and on arm (unlike most
other architectures) it depends on CONFIG_NUMA. This makes the
build fail if CONFIG_NUMA isn't set.

Signed-off-by: Roman Gushchin <guro@xxxxxx>
Reported-by: Andreas Schaufler <andreas.schaufler@xxxxxx>
---
 mm/hugetlb.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7a20cae7c77a..a6161239abde 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5439,16 +5439,21 @@ void __init hugetlb_cma_reserve(int order)
 	reserved = 0;
 	for_each_node_state(nid, N_ONLINE) {
-		unsigned long start_pfn, end_pfn;
 		unsigned long min_pfn = 0, max_pfn = 0;
-		int res, i;
+		int res;
+#ifdef CONFIG_NUMA
+		unsigned long start_pfn, end_pfn;
+		int i;
 
 		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 			if (!min_pfn)
 				min_pfn = start_pfn;
 			max_pfn = end_pfn;
 		}
-
+#else
+		min_pfn = min_low_pfn;
+		max_pfn = max_low_pfn;
+#endif
 		size = max(per_node, hugetlb_cma_size - reserved);
 		size = round_up(size, PAGE_SIZE << order);
-- 
2.24.1