On 3/25/20 1:07 PM, Baoquan He wrote:
On 03/25/20 at 03:06pm, Baoquan He wrote:
On 03/25/20 at 08:49am, Aneesh Kumar K.V wrote:
mm/sparse.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/sparse.c b/mm/sparse.c
index aadb7298dcef..3012d1f3771a 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -781,6 +781,8 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		ms->usage = NULL;
 	}
 	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+	/* Mark the section invalid */
+	ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
Not sure if we should also add a check in valid_section() or pfn_valid(),
e.g. check that ms->usage is valid too. Otherwise, this fix looks good to
me.
With SPARSEMEM_VMEMMAP enabled, we should do a validation check on ms->usage
before checking whether any subsection is valid, since we now have a case
in which ms->usage has been released while people still try to check it.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f0a2c184eb9a..d79bd938852e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1306,6 +1306,8 @@ static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
 {
 	int idx = subsection_map_index(pfn);
 
+	if (!ms->usage)
+		return 0;
 	return test_bit(idx, ms->usage->subsection_map);
 }
 #else
We always check for a valid section before we check pfn_section_valid():
static inline int pfn_valid(unsigned long pfn)
{
	struct mem_section *ms;

	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;
	ms = __nr_to_section(pfn_to_section_nr(pfn));
	if (!valid_section(ms))
		return 0;
	/*
	 * Traditionally early sections always returned pfn_valid() for
	 * the entire section-sized span.
	 */
	return early_section(ms) || pfn_section_valid(ms, pfn);
}
IMHO adding that if (!ms->usage) check is redundant.
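To spell out why: valid_section() just tests SECTION_HAS_MEM_MAP, so once the
hunk above clears that bit in section_deactivate(), pfn_valid() bails out at
the valid_section() check before pfn_section_valid() can dereference
ms->usage; early sections short-circuit on early_section() and never look at
ms->usage either. Roughly, the helpers look like this (paraphrased from
include/linux/mmzone.h, please double-check against your tree):

static inline int valid_section(struct mem_section *section)
{
	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
}

static inline int early_section(struct mem_section *section)
{
	return (section && (section->section_mem_map & SECTION_IS_EARLY));
}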
-aneesh