On 04/02/20 at 10:01am, Michal Hocko wrote:
> On Wed 01-04-20 10:51:55, Mike Rapoport wrote:
> > Hi,
> >
> > On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> [...]
> > > From the above information, we can remove HAVE_MEMBLOCK_NODE_MAP and
> > > replace it with CONFIG_NUMA. It sounds more sensible to store nid in
> > > memblock only when NUMA support is enabled.
> >
> > Replacing CONFIG_HAVE_MEMBLOCK_NODE_MAP with CONFIG_NUMA will work, but
> > it will not help clean up the whole node/zone initialization mess, and
> > we'll be stuck with two implementations.
>
> Yeah, this is far from optimal.
>
> > The overhead of enabling HAVE_MEMBLOCK_NODE_MAP is init-time only, as
> > most architectures discard the entire memblock anyway, so having it on
> > a UMA arch won't be a problem. The only exception is arm, which uses
> > memblock for pfn_valid(); there we may also need to think about a way
> > to compensate for the addition of nid to the memblock structures.
>
> Well, we can make memblock_region->nid defined only for CONFIG_NUMA.
> memblock_get_region_node would then unconditionally return 0 on UMA.
> Essentially the same way we do NUMA for other MM code. I only see a few
> direct usages of region->nid.

I checked the code again, and it seems HAVE_MEMBLOCK_NODE_MAP is selected
directly by all architectures that support it. That means
HAVE_MEMBLOCK_NODE_MAP is enabled by default on those architectures and has
no dependency on CONFIG_NUMA at all. E.g. on x86, free_area_init_nodes() is
called in the generic code path, while free_area_init_nodes() itself is
defined inside the CONFIG_HAVE_MEMBLOCK_NODE_MAP ifdeffery. So I tend to
agree with Mike that we should remove HAVE_MEMBLOCK_NODE_MAP first on all
architectures. After that, we can check whether it's worth defining
memblock_region->nid only for the CONFIG_NUMA case, as in the rough sketch
at the end of this mail.

config X86
	def_bool y
	...
	select HAVE_MEMBLOCK_NODE_MAP
	...
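
If we do restrict nid to CONFIG_NUMA later, I imagine it could look
something like the below in include/linux/memblock.h. This is just a rough,
untested sketch of Michal's suggestion; it reuses the accessor pattern that
memblock.h already has, only keyed on CONFIG_NUMA instead of
CONFIG_HAVE_MEMBLOCK_NODE_MAP:

struct memblock_region {
	phys_addr_t base;
	phys_addr_t size;
	enum memblock_flags flags;
#ifdef CONFIG_NUMA
	int nid;	/* nid is stored only when NUMA is enabled */
#endif
};

#ifdef CONFIG_NUMA
static inline void memblock_set_region_node(struct memblock_region *r,
					    int nid)
{
	r->nid = nid;
}

static inline int memblock_get_region_node(const struct memblock_region *r)
{
	return r->nid;
}
#else
static inline void memblock_set_region_node(struct memblock_region *r,
					    int nid)
{
}

static inline int memblock_get_region_node(const struct memblock_region *r)
{
	/* UMA: everything lives on node 0 */
	return 0;
}
#endif

The few direct users of region->nid that Michal mentioned would need to go
through memblock_get_region_node() instead, so UMA builds keep working
without any #ifdef at the call sites.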