On Thu 12-03-20 14:18:26, Wei Yang wrote:
> On Thu, Mar 12, 2020 at 06:34:16AM -0700, Matthew Wilcox wrote:
> >On Thu, Mar 12, 2020 at 09:08:22PM +0800, Baoquan He wrote:
> >> This change makes populate_section_memmap()/depopulate_section_memmap
> >> much simpler.
> >>
> >> Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
> >> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> >> ---
> >> v1->v2:
> >>   The old version only used __get_free_pages() to replace alloc_pages()
> >>   in populate_section_memmap().
> >>   http://lkml.kernel.org/r/20200307084229.28251-8-bhe@xxxxxxxxxx
> >>
> >>  mm/sparse.c | 27 +++------------------------
> >>  1 file changed, 3 insertions(+), 24 deletions(-)
> >>
> >> diff --git a/mm/sparse.c b/mm/sparse.c
> >> index bf6c00a28045..362018e82e22 100644
> >> --- a/mm/sparse.c
> >> +++ b/mm/sparse.c
> >> @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
> >>  struct page * __meminit populate_section_memmap(unsigned long pfn,
> >>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> >>  {
> >> -	struct page *page, *ret;
> >> -	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
> >> -
> >> -	page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
> >> -	if (page)
> >> -		goto got_map_page;
> >> -
> >> -	ret = vmalloc(memmap_size);
> >> -	if (ret)
> >> -		goto got_map_ptr;
> >> -
> >> -	return NULL;
> >> -got_map_page:
> >> -	ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
> >> -got_map_ptr:
> >> -
> >> -	return ret;
> >> +	return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
> >> +			     GFP_KERNEL|__GFP_NOWARN, nid);
> >
> >Use of NOWARN here is inappropriate, because there's no fallback.
>
> Hmm... this replacement is a little tricky.
>
> When you look into kvmalloc_node(), it will only do the fallback if the
> size is bigger than PAGE_SIZE. This means the change here may not be
> equivalent to the old code if memmap_size is less than PAGE_SIZE.

I do not understand your concern, to be honest. Even if a sub-page memmap
size were possible (I haven't checked), I fail to see why kmalloc would
fail to allocate while vmalloc would have a bigger chance to succeed.
--
Michal Hocko
SUSE Labs
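
For reference, below is a minimal sketch of the fallback behaviour that
kvmalloc_node() implements. It is a simplified paraphrase, not the exact
mm/util.c code (the real implementation also checks that the gfp flags are
GFP_KERNEL-compatible before falling back), but it shows the point Wei Yang
refers to above: the kmalloc attempt always comes first, and vmalloc is only
tried for requests larger than a page. kmalloc_node() and vmalloc_node() are
real kernel APIs; the wrapper itself is illustrative only.

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Simplified, illustrative sketch of the kvmalloc_node() logic. */
static void *kvmalloc_node_sketch(size_t size, gfp_t flags, int node)
{
	gfp_t kmalloc_flags = flags;
	void *ret;

	/*
	 * For large requests, do not retry hard or warn on the kmalloc
	 * attempt, since the vmalloc fallback can still satisfy it.
	 */
	if (size > PAGE_SIZE)
		kmalloc_flags |= __GFP_NOWARN | __GFP_NORETRY;

	ret = kmalloc_node(size, kmalloc_flags, node);

	/* Sub-page requests never fall back to vmalloc. */
	if (ret || size <= PAGE_SIZE)
		return ret;

	return vmalloc_node(size, node);
}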