On Fri, Jun 14, 2019 at 11:03 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> On Fri, Jun 14, 2019 at 7:59 AM Qian Cai <cai@xxxxxx> wrote:
> >
> > On Fri, 2019-06-14 at 14:28 +0530, Aneesh Kumar K.V wrote:
> > > Qian Cai <cai@xxxxxx> writes:
> > > >
> > > > 1) offline is busted [1]. It looks like test_pages_in_a_zone() missed
> > > > the same pfn_section_valid() check.
> > > >
> > > > 2) powerpc booting is generating endless warnings [2]. In
> > > > vmemmap_populated() at arch/powerpc/mm/init_64.c, I tried to change
> > > > PAGES_PER_SECTION to PAGES_PER_SUBSECTION, but that alone seems not
> > > > to be enough.
> > >
> > > Can you check with this change on ppc64? I haven't reviewed this series
> > > yet; I did limited testing with the change. Before merging this I need
> > > to go through the full series again. The vmemmap populate code on ppc64
> > > needs to handle two translation modes (hash and radix). With respect to
> > > vmemmap, hash doesn't set up a translation in the Linux page table.
> > > Hence we need to make sure we don't try to set up a mapping for a range
> > > which is already covered by an existing mapping.
> >
> > It works fine.

Strange... it would only change behavior if valid_section() is true when
pfn_valid() is not, or vice versa (a toy model of this check follows the
patch below). They "should" be identical because subsection-size ==
section-size on PowerPC, at least with the current definition of
SUBSECTION_SHIFT. I suspect free_area_init_nodes() is too late to call
subsection_map_init() for PowerPC. Can you give the attached incremental
patch a try? This will break support for doing sub-section hot-add in a
section that was only partially populated early at init, but that can be
repaired later in the series. First things first, don't regress.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 874eb22d22e4..520c83aa0fec 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7286,12 +7286,10 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 
 	/* Print out the early node map */
 	pr_info("Early memory node ranges\n");
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
 		pr_info("  node %3d: [mem %#018Lx-%#018Lx]\n", nid,
 			(u64)start_pfn << PAGE_SHIFT,
 			((u64)end_pfn << PAGE_SHIFT) - 1);
-		subsection_map_init(start_pfn, end_pfn - start_pfn);
-	}
 
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
diff --git a/mm/sparse.c b/mm/sparse.c
index 0baa2e55cfdd..bca8e6fa72d2 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -533,6 +533,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
 		}
 		check_usemap_section_nr(nid, usage);
 		sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage);
+		subsection_map_init(section_nr_to_pfn(pnum), PAGES_PER_SECTION);
 		usage = (void *) usage + mem_section_usage_size();
 	}
 	sparse_buffer_fini();
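For illustration, here is a toy userspace model of the check in question.
The names mirror pfn_valid(), pfn_section_valid(), and subsection_map_init()
from the series, but the sizes and data structures are simplified stand-ins,
not the kernel implementation. It shows how a pfn can sit in a section that
passes valid_section() yet still fail pfn_valid() until the subsection map
for its range has been initialized:

/*
 * pfn_valid_sketch.c: toy model of the valid_section()/pfn_valid()
 * mismatch. All sizes and types are simplified stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_SECTION    32768UL  /* 128M sections, 4K pages */
#define PAGES_PER_SUBSECTION 512UL    /* 2M subsections, 64 per section */

struct mem_section {
        bool present;            /* stand-in for the valid_section() state */
        uint64_t subsection_map; /* one bit per subsection */
};

static struct mem_section sections[16];

static bool valid_section(unsigned long pfn)
{
        return sections[pfn / PAGES_PER_SECTION].present;
}

/* stand-in for pfn_section_valid(): is this pfn's subsection bit set? */
static bool pfn_section_valid(unsigned long pfn)
{
        struct mem_section *ms = &sections[pfn / PAGES_PER_SECTION];
        unsigned long idx = (pfn % PAGES_PER_SECTION) / PAGES_PER_SUBSECTION;

        return ms->subsection_map & (1ULL << idx);
}

/* with sub-section support, pfn_valid() needs both checks to pass */
static bool pfn_valid(unsigned long pfn)
{
        return valid_section(pfn) && pfn_section_valid(pfn);
}

/* stand-in for subsection_map_init(): mark a range's subsections present */
static void subsection_map_init(unsigned long pfn, unsigned long nr_pages)
{
        unsigned long p;

        for (p = pfn; p < pfn + nr_pages; p += PAGES_PER_SUBSECTION) {
                struct mem_section *ms = &sections[p / PAGES_PER_SECTION];
                unsigned long idx =
                        (p % PAGES_PER_SECTION) / PAGES_PER_SUBSECTION;

                ms->subsection_map |= 1ULL << idx;
        }
}

int main(void)
{
        unsigned long pfn = 1234;

        /* sparse_init marked the section valid... */
        sections[0].present = true;
        printf("before subsection_map_init(): valid_section=%d pfn_valid=%d\n",
               valid_section(pfn), pfn_valid(pfn));

        /* ...but pfn_valid() only agrees once the subsection map is set up */
        subsection_map_init(0, PAGES_PER_SECTION);
        printf("after  subsection_map_init(): valid_section=%d pfn_valid=%d\n",
               valid_section(pfn), pfn_valid(pfn));

        return 0;
}

Compiled and run, this prints valid_section=1 pfn_valid=0 before the init
call and 1/1 after, which is the suspected PowerPC symptom: any pfn_valid()
user that runs between sparse_init() and free_area_init_nodes() sees early
memory as invalid, hence moving subsection_map_init() into sparse_init_nid()
in the patch above.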