Re: [PATCH v2 2/4] mm/sparse: Optimize sparse_add_one_section()

On 03/26/19 at 11:17am, Michal Hocko wrote:
> On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > Reorder the allocation of usemap and memmap, since the usemap
> > > > allocation is much simpler and cheaper. Otherwise the hard work of
> > > > preparing the memmap is done first, only to be rolled back because
> > > > the usemap allocation failed.
> > > 
> > > Is this really worth it? I can see that !VMEMMAP does a memmap-sized
> > > allocation, which at 2MB is a costly one, but we do not use
> > > __GFP_RETRY_MAYFAIL, so the allocator backs off early.
> > 
> > In the !VMEMMAP case it truly is a simple, direct allocation, and the
> > usemap, at only 32 bytes, is much smaller, so the ordering hardly
> > matters there. However, it does help a little in the VMEMMAP case.
> 
> How does it help there? The failure should be even less probable
> there, because we simply fall back to small 4kB pages and those
> essentially never fail.
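
(For context: the fallback referred to here is the pattern in the x86
vmemmap population code, which tries a PMD-sized block first and only
then maps the range with base pages. A rough sketch, simplified from
vmemmap_populate_hugepages(), not the exact upstream code:

	for (addr = start; addr < end; addr = next) {
		next = pmd_addr_end(addr, end);
		p = vmemmap_alloc_block_buf(PMD_SIZE, node);
		if (p) {
			/* got a 2MB block, map it with a single PMD entry */
			...
			continue;
		}
		/* PMD-sized allocation failed: map this range with 4kB
		 * pages instead, which essentially never fails */
		if (vmemmap_populate_basepages(addr, next, node))
			return -ENOMEM;
	}
)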

OK, I am fine with dropping it. Or how about only moving the section
existence check earlier, to avoid the unnecessary usemap/memmap
allocations?
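
With that, the front of sparse_add_one_section() would read roughly as
below (a sketch of the function after the patch, reassembled against
the mm/sparse.c this series applies to, abbreviated; the formal patch
follows):

	int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
					     struct vmem_altmap *altmap)
	{
		unsigned long section_nr = pfn_to_section_nr(start_pfn);
		struct mem_section *ms;
		struct page *memmap;
		unsigned long *usemap;
		int ret;

		ret = sparse_index_init(section_nr, nid);
		if (ret < 0 && ret != -EEXIST)
			return ret;

		/* Bail out before doing any allocation work */
		ms = __pfn_to_section(start_pfn);
		if (ms->section_mem_map & SECTION_MARKED_PRESENT)
			return -EEXIST;

		memmap = kmalloc_section_memmap(section_nr, nid, altmap);
		if (!memmap)
			return -ENOMEM;
		usemap = __kmalloc_section_usemap();
		if (!usemap) {
			__kfree_section_memmap(memmap, altmap);
			return -ENOMEM;
		}
		...
		/* no "out:" rollback path needed any more */
		return 0;
	}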


From 7594b86ebf5d6fcc8146eca8fc5625f1961a15b1 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@xxxxxxxxxx>
Date: Tue, 26 Mar 2019 18:48:39 +0800
Subject: [PATCH] mm/sparse: Check section's existence earlier in
 sparse_add_one_section()

There is no need to allocate usemap and memmap if the section has
already been present. Checking this earlier also lets us drop the
rollback handling on the failure path.

Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
---
 mm/sparse.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 363f9d31b511..f564b531e0f7 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -714,7 +714,11 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	ret = sparse_index_init(section_nr, nid);
 	if (ret < 0 && ret != -EEXIST)
 		return ret;
-	ret = 0;
+
+	ms = __pfn_to_section(start_pfn);
+	if (ms->section_mem_map & SECTION_MARKED_PRESENT)
+		return -EEXIST;
+
 	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
 	if (!memmap)
 		return -ENOMEM;
@@ -724,12 +728,6 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 		return -ENOMEM;
 	}
 
-	ms = __pfn_to_section(start_pfn);
-	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
-		ret = -EEXIST;
-		goto out;
-	}
-
 	/*
 	 * Poison uninitialized struct pages in order to catch invalid flags
 	 * combinations.
@@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	section_mark_present(ms);
 	sparse_init_one_section(ms, section_nr, memmap, usemap);
 
-out:
-	if (ret < 0) {
-		kfree(usemap);
-		__kfree_section_memmap(memmap, altmap);
-	}
-	return ret;
+	return 0;
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
-- 
2.17.2



