On Wed, Jun 27, 2018 at 09:31:14AM +0800, Baoquan He wrote:
> In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, system
> will allocate one continuous memory chunk for mem maps on one node and
> populate the relevant page tables to map memory section one by one. If
> fail to populate for a certain mem section, print warning and its
> ->section_mem_map will be cleared to cancel the marking of being present.
> Like this, the number of mem sections marked as present could become
> less during sparse_init() execution.
>
> Here just defer the ms->section_mem_map clearing if failed to populate
> its page tables until the last for_each_present_section_nr() loop. This
> is in preparation for later optimizing the mem map allocation.
>
> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> ---
>  mm/sparse-vmemmap.c |  1 -
>  mm/sparse.c         | 12 ++++++++----
>  2 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index bd0276d5f66b..640e68f8324b 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -303,7 +303,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>  		ms = __nr_to_section(pnum);
>  		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
>  		       __func__);
> -		ms->section_mem_map = 0;

Since we are deferring the clearing of section_mem_map, I guess we do not
need

	struct mem_section *ms;
	ms = __nr_to_section(pnum);

anymore, right?

>  	}
>
>  	if (vmemmap_buf_start) {
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 6314303130b0..71ad53da2cd1 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -451,7 +451,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>  		ms = __nr_to_section(pnum);
>  		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
>  		       __func__);
> -		ms->section_mem_map = 0;

The same goes here.

-- 
Oscar Salvador
SUSE L3
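
For context, with the ->section_mem_map clearing deferred by the patch and
the now-unneeded ms lookup dropped as suggested above, the failure branch in
sparse_mem_maps_populate_node() could end up looking roughly like the sketch
below. The loop scaffolding (the present_section_nr() check and the
sparse_mem_map_populate() call) is reconstructed from the surrounding
function of that era and is an assumption here, not taken from the quoted
hunks:

	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
		if (!present_section_nr(pnum))
			continue;

		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
		if (map_map[pnum])
			continue;

		/*
		 * No mem_section lookup is needed here any more: the
		 * ->section_mem_map of sections that failed to populate
		 * is cleared later, in the final
		 * for_each_present_section_nr() loop of sparse_init().
		 */
		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
		       __func__);
	}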