On Thu, Mar 12, 2020 at 07:25:35AM -0700, Matthew Wilcox wrote:
>On Thu, Mar 12, 2020 at 02:18:26PM +0000, Wei Yang wrote:
>> On Thu, Mar 12, 2020 at 06:34:16AM -0700, Matthew Wilcox wrote:
>> >On Thu, Mar 12, 2020 at 09:08:22PM +0800, Baoquan He wrote:
>> >> This change makes populate_section_memmap()/depopulate_section_memmap()
>> >> much simpler.
>> >>
>> >> Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
>> >> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
>> >> ---
>> >> v1->v2:
>> >>   The old version only used __get_free_pages() to replace alloc_pages()
>> >>   in populate_section_memmap().
>> >>   http://lkml.kernel.org/r/20200307084229.28251-8-bhe@xxxxxxxxxx
>> >>
>> >>  mm/sparse.c | 27 +++------------------------
>> >>  1 file changed, 3 insertions(+), 24 deletions(-)
>> >>
>> >> diff --git a/mm/sparse.c b/mm/sparse.c
>> >> index bf6c00a28045..362018e82e22 100644
>> >> --- a/mm/sparse.c
>> >> +++ b/mm/sparse.c
>> >> @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
>> >>  struct page * __meminit populate_section_memmap(unsigned long pfn,
>> >>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>> >>  {
>> >> -	struct page *page, *ret;
>> >> -	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
>> >> -
>> >> -	page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
>> >> -	if (page)
>> >> -		goto got_map_page;
>> >> -
>> >> -	ret = vmalloc(memmap_size);
>> >> -	if (ret)
>> >> -		goto got_map_ptr;
>> >> -
>> >> -	return NULL;
>> >> -got_map_page:
>> >> -	ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
>> >> -got_map_ptr:
>> >> -
>> >> -	return ret;
>> >> +	return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
>> >> +			GFP_KERNEL|__GFP_NOWARN, nid);
>> >
>> >Use of NOWARN here is inappropriate, because there's no fallback.
>>
>> Hmm... this replacement is a little tricky.
>>
>> When you look into kvmalloc_node(), it will do the fallback only if the
>> size is bigger than PAGE_SIZE. This means the change here may not be
>> equivalent to the old behavior if memmap_size is less than PAGE_SIZE.
>>
>> For example, if:
>>
>>     PAGE_SIZE = 64K
>>     SECTION_SIZE = 128M
>>
>> that would lead to memmap_size = 2K, which is less than PAGE_SIZE.
>
>Yes, I thought about that.  I decided it wasn't a problem, as long as
>the struct page remains aligned, and we now have a guarantee that allocations
>above 512 bytes in size are aligned.  With a 64 byte struct page, as long

Where does this 512-byte condition come from?

>as we're allocating at least 8 pages, we know it'll be naturally aligned.
>
>Your calculation doesn't take into account the size of struct page.
>128M / 64k is indeed 2k, but you forgot to multiply by 64, which takes
>us to 128kB.

You are right. But could there be other combinations, now or in the future?
For example, there are definitions of

    #define SECTION_SIZE_BITS	26
    #define SECTION_SIZE_BITS	24

Are we sure it won't break something?

-- 
Wei Yang
Help you, Help me
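
For reference, the arithmetic under discussion can be checked with a small
userspace sketch. It assumes sizeof(struct page) is 64 bytes (in a real
kernel this depends on the config), and the PAGE_SIZE/SECTION_SIZE_BITS
pairs below are just the hypothetical combinations mentioned in this thread:

#include <stdio.h>

#define STRUCT_PAGE_SIZE 64UL	/* assumed; config-dependent in a real kernel */

/* memmap_size = sizeof(struct page) * PAGES_PER_SECTION */
static void check(unsigned int page_shift, unsigned int section_size_bits)
{
	unsigned long page_size = 1UL << page_shift;
	unsigned long pages_per_section = 1UL << (section_size_bits - page_shift);
	unsigned long memmap_size = STRUCT_PAGE_SIZE * pages_per_section;

	printf("PAGE_SIZE=%luK SECTION_SIZE=%luM -> memmap_size=%luK (%s PAGE_SIZE)\n",
	       page_size >> 10, (1UL << section_size_bits) >> 20,
	       memmap_size >> 10, memmap_size > page_size ? ">" : "<=");
}

int main(void)
{
	check(16, 27);	/* 64K pages, 128M sections: 128K > 64K */
	check(16, 26);	/* 64K pages,  64M sections:  64K == PAGE_SIZE */
	check(16, 24);	/* 64K pages,  16M sections:  16K < 64K */
	return 0;
}

With SECTION_SIZE_BITS = 24 and 64K pages, memmap_size comes to 16K, below
PAGE_SIZE, so kvmalloc_node() would never reach the vmalloc() fallback;
that is exactly the kind of combination the question above worries about.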