On 06/24/20 at 09:47am, Baoquan He wrote:
> On 06/23/20 at 05:21pm, Dan Williams wrote:
> > On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
> > <richard.weiyang@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > For early sections, we assume its memmap will never be partially
> > > removed. But the current behavior breaks this.
> >
> > Where do we assume that?
> >
> > The primary use case for this was mapping pmem that collides with
> > System-RAM in the same 128MB section. That collision will certainly be
> > depopulated on-demand depending on the state of the pmem device. So,
> > I'm not understanding the problem or the benefit of this change.
>
> I was also confused when reviewing this patch; the patch log is a little
> short and simple. From the current code, with SPARSE_VMEMMAP enabled, we
> do build the memmap for the whole memory section during boot, even though
> some sections may be only partially populated. We just mark the
> subsection map for the present pages.
>
> Later, if a pmem device is mapped into a partially populated boot memory
> section, we just fill the relevant subsection map and return directly,
> w/o building the memmap for it, in section_activate(), because the memmap
> for the non-present RAM part is already there. I guess this is what Wei
> is trying to do to keep the behaviour consistent for pmem device adding, or

OK, from Wei's reply I realized this patch is a necessary fix. If we
depopulate the partial memmap on pmem removal, the later pmem re-adding
won't have a valid memmap.

> pmem device removing and later adding again.
>
> Please correct me if I am wrong.
>
> To me, fixing it looks good. But a clear doc or code comment is
> necessary so that people can understand the code in less time.
> Leaving it as is doesn't cause harm. I personally tend to choose
> the former.
>
> paging_init()
> ->sparse_init()
>   ->sparse_init_nid()
>     {
>         ...
>         for_each_present_section_nr(pnum_begin, pnum) {
>             ...
>             map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
>                                             nid, NULL);
>             ...
>         }
>     }
> ...
> ->zone_sizes_init()
>   ->free_area_init()
>     {
>         for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>             subsection_map_init(start_pfn, end_pfn - start_pfn);
>         }
>     }
>
> __add_pages()
> ->sparse_add_section()
>   ->section_activate()
>     {
>         ...
>         fill_subsection_map();
>         if (nr_pages < PAGES_PER_SECTION && early_section(ms)) <----------*********
>             return pfn_to_page(pfn);
>         ...
>     }
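
For reference, the sizes involved (x86_64 sparsemem geometry as I read it
from include/linux/mmzone.h; shown only to make the 128MB collision case
concrete):

	#define SECTION_SIZE_BITS	27	/* 128MB per section  */
	#define SUBSECTION_SHIFT	21	/* 2MB per subsection */
	/* So each section's subsection map tracks 128MB / 2MB = 64 bits. */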
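
To spell out the reuse logic at the marked line, here is a simplified
sketch of the section_activate() flow (trimmed and lightly paraphrased,
not the exact upstream code; error handling and altmap checks omitted):

	static struct page *section_activate(int nid, unsigned long pfn,
			unsigned long nr_pages, struct vmem_altmap *altmap)
	{
		struct mem_section *ms = __pfn_to_section(pfn);
		int rc;

		/* Record the subsections being added in the subsection map. */
		rc = fill_subsection_map(pfn, nr_pages);
		if (rc)
			return ERR_PTR(rc);

		/*
		 * An early section already has a memmap covering the whole
		 * section, built at boot by sparse_init(), so a partial
		 * (subsection) hot-add can reuse it instead of allocating
		 * a new one.
		 */
		if (nr_pages < PAGES_PER_SECTION && early_section(ms))
			return pfn_to_page(pfn);

		return populate_section_memmap(pfn, nr_pages, nid, altmap);
	}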
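
And on the removal side, a minimal sketch of what I understand the fix to
be doing in section_deactivate() (paraphrased, not the actual patch):
before the fix, a partial removal from an early section could fall through
to depopulate_section_memmap() and tear down part of the boot-time memmap,
so a later re-add would take the early_section() shortcut above and return
pfn_to_page() with no valid memmap behind it.

	static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
			struct vmem_altmap *altmap)
	{
		struct mem_section *ms = __pfn_to_section(pfn);
		bool section_is_early = early_section(ms);
		struct page *memmap = NULL;

		/* Drop the subsections being removed from the subsection map. */
		if (clear_subsection_map(pfn, nr_pages))
			return;

		if (is_subsection_map_empty(ms)) {
			/* Whole section is gone; the full memmap can be freed. */
			memmap = sparse_decode_mem_map(ms->section_mem_map,
						       pfn_to_section_nr(pfn));
			ms->section_mem_map = (unsigned long)NULL;
		}

		/*
		 * The memmap of an early section is always fully populated,
		 * so never depopulate it partially; otherwise a later re-add
		 * takes the early_section() shortcut and gets pages whose
		 * memmap was torn down.
		 */
		if (!section_is_early)
			depopulate_section_memmap(pfn, nr_pages, altmap);
		else if (memmap)
			free_map_bootmem(memmap);
	}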