On 03/12/20 at 06:34am, Matthew Wilcox wrote:
> On Thu, Mar 12, 2020 at 09:08:22PM +0800, Baoquan He wrote:
> > This change makes populate_section_memmap()/depopulate_section_memmap()
> > much simpler.
> > 
> > Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
> > Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> > ---
> > v1->v2:
> >   The old version only used __get_free_pages() to replace alloc_pages()
> >   in populate_section_memmap().
> >   http://lkml.kernel.org/r/20200307084229.28251-8-bhe@xxxxxxxxxx
> > 
> >  mm/sparse.c | 27 +++------------------------
> >  1 file changed, 3 insertions(+), 24 deletions(-)
> > 
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index bf6c00a28045..362018e82e22 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
> >  struct page * __meminit populate_section_memmap(unsigned long pfn,
> >  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> >  {
> > -	struct page *page, *ret;
> > -	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
> > -
> > -	page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
> > -	if (page)
> > -		goto got_map_page;
> > -
> > -	ret = vmalloc(memmap_size);
> > -	if (ret)
> > -		goto got_map_ptr;
> > -
> > -	return NULL;
> > -got_map_page:
> > -	ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
> > -got_map_ptr:
> > -
> > -	return ret;
> > +	return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
> > +			     GFP_KERNEL|__GFP_NOWARN, nid);
> 
> Use of NOWARN here is inappropriate, because there's no fallback.

kvmalloc_node() already adds __GFP_NOWARN internally when it tries the
contiguous allocation. I will remove it.

> Also, I'd use array_size(sizeof(struct page), PAGES_PER_SECTION).

That's fine with me, even though we know there is no risk of overflow
here. I will use array_size().

Thanks.
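
For reference, here is a sketch of how populate_section_memmap() might
end up with both comments folded in; this is only an illustration of the
changes discussed above, not the actual v3 patch:

struct page * __meminit populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	/*
	 * kvmalloc_node() falls back to vmalloc() on its own and already
	 * passes __GFP_NOWARN for the contiguous attempt, so plain
	 * GFP_KERNEL is enough here; array_size() guards against the
	 * (theoretical) multiplication overflow.
	 */
	return kvmalloc_node(array_size(sizeof(struct page), PAGES_PER_SECTION),
			     GFP_KERNEL, nid);
}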