On 2/16/24 06:52, Andrew Morton wrote:
> On Thu, 15 Feb 2024 10:04:05 +0530 Anshuman Khandual <anshuman.khandual@xxxxxxx> wrote:
>
>> HugeTLB CMA area array is being created for possible MAX_NUMNODES without
>> ensuring corresponding MAX_CMA_AREAS support in CMA. This fails the build
>> for such scenarios indicating need for CONFIG_CMA_AREAS adjustment.
>>
>> ...
>>
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -7743,6 +7743,13 @@ void __init hugetlb_cma_reserve(int order)
>>  	}
>>
>>  	reserved = 0;
>> +
>> +	/*
>> +	 * There needs to be enough MAX_CMA_AREAS to accommodate
>> +	 * MAX_NUMNODES heap areas being created here. Otherwise
>> +	 * adjust CONFIG_CMA_AREAS as required.
>> +	 */
>> +	BUILD_BUG_ON(MAX_CMA_AREAS < MAX_NUMNODES);
>>  	for_each_online_node(nid) {
>>  		int res;
>
> This blew up my x86_64 allmodconfig build. I didn't check whether this
> is because x86_64 kconfig is broken or because the test is bogus.
>
> I won't be releasing a kernel which fails x86_64 allmodconfig.

Okay, understood.

> So before adding a new assertion can we please first make a best effort
> to implement the fixes which are required to prevent the new assertion
> from triggering?

Even after applying the previous patch regarding MAX_CMA_AREAS (below),
the build still fails on "x86_64 allmodconfig".

https://lore.kernel.org/all/20240205051929.298559-1-anshuman.khandual@xxxxxxx/

As defined in arch/x86/Kconfig:

config NODES_SHIFT
	int "Maximum NUMA Nodes (as a power of 2)" if !MAXSMP
	range 1 10
	default "10" if MAXSMP
	default "6" if X86_64
	default "3"
	depends on NUMA
	help
	  Specify the maximum number of NUMA Nodes available on the target
	  system. Increases memory reserved to accommodate various tables.

So with MAXSMP enabled, NODES_SHIFT = 10 and MAX_NUMNODES = 1024 (1 << 10).
Incrementing CONFIG_CMA_AREAS accordingly solves the current problem, i.e.
setting CONFIG_CMA_AREAS = 1024 makes the build pass.
config CMA_AREAS
	int "Maximum count of the CMA areas"
	depends on CMA
	default 20 if NUMA
	default 8
	help
	  CMA allows to create CMA areas for particular purpose, mainly,
	  used as device private area. This parameter sets the maximum
	  number of CMA area in the system.

	  If unsure, leave the default value "8" in UMA and "20" in NUMA.

The current default for CMA_AREAS is just 20 with NUMA enabled, so I am
wondering whether CMA_AREAS should be defaulted to 1024, but that does
not seem feasible for smaller systems, or whether we should instead find
some x86 specific solution. Please let me know if there are any
suggestions.
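For illustration only, an arch-aware default could look something like the
following sketch; the "1024 if MAXSMP" line is purely hypothetical and not
something already proposed in this thread:

config CMA_AREAS
	int "Maximum count of the CMA areas"
	depends on CMA
	# illustrative only: cover one area per possible node under MAXSMP
	default 1024 if MAXSMP
	default 20 if NUMA
	default 8

This would keep the small defaults for ordinary configs while letting the
MAXSMP case satisfy the BUILD_BUG_ON, though it would grow the cma_areas
array for every MAXSMP build.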