sparse_buffer_alloc(size) hands out `size` bytes from sparsemap_buf after
first aligning sparsemap_buf to `size`.  That size is at least
PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION) and usually larger
than PAGE_SIZE.

sparse_buffer_fini() only frees the memory between sparsemap_buf and
sparsemap_buf_end.  Since sparsemap_buf may already have been advanced by
PTR_ALIGN(), the aligned space in front of sparsemap_buf is wasted and
nothing ever touches it again.

On our ARM32 platform (without SPARSEMEM_VMEMMAP):
  sparse_buffer_init   Reserve  d359c000 - d3e9c000 (9M)
  sparse_buffer_alloc  Alloc    d3a00000 - d3e80000 (4.5M)
  sparse_buffer_fini   Free     d3e80000 - d3e9c000 (~=100k)
The reserved memory between d359c000 and d3a00000 (~=4.4M) is never freed.

On an ARM64 platform (with SPARSEMEM_VMEMMAP):
  sparse_buffer_init   Reserve  ffffffc07d623000 - ffffffc07f623000 (32M)
  sparse_buffer_alloc  Alloc    ffffffc07d800000 - ffffffc07f600000 (30M)
  sparse_buffer_fini   Free     ffffffc07f600000 - ffffffc07f623000 (140K)
The reserved memory between ffffffc07d623000 and ffffffc07d800000
(~=1.9M) is never freed.

Free the redundant aligned memory explicitly.

Signed-off-by: Lecopzer Chen <lecopzer.chen@xxxxxxxxxxxx>
Signed-off-by: Mark-PK Tsai <Mark-PK.Tsai@xxxxxxxxxxxx>
Cc: YJ Chiang <yj.chiang@xxxxxxxxxxxx>
Cc: Lecopzer Chen <lecopzer.chen@xxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
---
 mm/sparse.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index fd13166949b5..2b3b5be85120 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -428,6 +428,12 @@ struct page __init *sparse_mem_map_populate(unsigned long pnum, int nid,
 static void *sparsemap_buf __meminitdata;
 static void *sparsemap_buf_end __meminitdata;
 
+static inline void __init sparse_buffer_free(unsigned long size)
+{
+	WARN_ON(!sparsemap_buf || size == 0);
+	memblock_free_early(__pa(sparsemap_buf), size);
+}
+
 static void __init sparse_buffer_init(unsigned long size, int nid)
 {
 	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);
@@ -444,7 +450,7 @@ static void __init sparse_buffer_fini(void)
 	unsigned long size = sparsemap_buf_end - sparsemap_buf;
 
 	if (sparsemap_buf && size > 0)
-		memblock_free_early(__pa(sparsemap_buf), size);
+		sparse_buffer_free(size);
 	sparsemap_buf = NULL;
 }
 
@@ -456,8 +462,12 @@ void * __meminit sparse_buffer_alloc(unsigned long size)
 		ptr = PTR_ALIGN(sparsemap_buf, size);
 		if (ptr + size > sparsemap_buf_end)
 			ptr = NULL;
-		else
+		else {
+			/* Free redundant aligned space */
+			if ((unsigned long)(ptr - sparsemap_buf) > 0)
+				sparse_buffer_free((unsigned long)(ptr - sparsemap_buf));
 			sparsemap_buf = ptr + size;
+		}
 	}
 	return ptr;
 }
-- 
2.18.0
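
For anyone who wants to check the arithmetic, below is a minimal userspace
C sketch (illustrative only, not part of the patch; ptr_align() and the
hard-coded constants are assumptions taken from the ARM32 example in the
changelog).  It mirrors the PTR_ALIGN() rounding done by
sparse_buffer_alloc() and prints the size of the gap that the new
sparse_buffer_free() call releases:

/*
 * Illustrative userspace sketch, not part of the patch.  ptr_align()
 * uses the same rounding formula as the kernel's PTR_ALIGN()/ALIGN();
 * the constants are the sparsemap_buf values from the ARM32 example.
 */
#include <stdio.h>
#include <stdint.h>

static uintptr_t ptr_align(uintptr_t p, uintptr_t a)
{
	return (p + a - 1) & ~(a - 1);
}

int main(void)
{
	uintptr_t buf     = 0xd359c000UL; /* sparsemap_buf after sparse_buffer_init() */
	uintptr_t buf_end = 0xd3e9c000UL; /* sparsemap_buf_end */
	uintptr_t size    = 0x480000UL;   /* 4.5M request from sparse_buffer_alloc() */

	uintptr_t ptr = ptr_align(buf, size); /* aligned allocation start */
	uintptr_t gap = ptr - buf;            /* gap this patch now frees */

	printf("alloc %#lx - %#lx\n",
	       (unsigned long)ptr, (unsigned long)(ptr + size));
	printf("gap before aligned ptr: %#lx (%lu KB)\n",
	       (unsigned long)gap, (unsigned long)(gap >> 10));
	printf("tail left for sparse_buffer_fini(): %#lx\n",
	       (unsigned long)(buf_end - (ptr + size)));
	return 0;
}

Run with these numbers it reports an allocation at d3a00000, a gap of
0x464000 (~4.4M) in front of it and a 0x1c000 tail, matching the ~4.4M
wasted region and the ~=100k freed by sparse_buffer_fini() quoted above.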