The patch titled
     Subject: mm/sparse: consistently do not zero memmap
has been removed from the -mm tree.  Its filename was
     mm-sparse-consistently-do-not-zero-memmap.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Vincent Whitchurch <vincent.whitchurch@xxxxxxxx>
Subject: mm/sparse: consistently do not zero memmap

sparsemem without VMEMMAP has two allocation paths to allocate the memory
needed for its memmap (done in sparse_mem_map_populate()).

In one allocation path (sparse_buffer_alloc() succeeds), the memory is not
zeroed (since it was previously allocated with
memblock_alloc_try_nid_raw()).

In the other allocation path (sparse_buffer_alloc() fails and
sparse_mem_map_populate() falls back to memblock_alloc_try_nid()), the
memory is zeroed.

AFAICS this difference is not intentional.  If the code is supposed to
work with non-initialized memory (__init_single_page() takes care of
zeroing the struct pages which are actually used), we should consistently
not zero the memory, to avoid masking bugs.

(I noticed this because on my ARM64 platform, with 1 GiB of memory the
first [and only] section is allocated from the zeroing path, while with
2 GiB of memory the first 1 GiB section is allocated from the non-zeroing
path.)

Michal: "the main user visible problem is a memory wastage.  The overall
amount of memory should be small.  I wouldn't call it stable material."

Link: http://lkml.kernel.org/r/20191030131122.8256-1-vincent.whitchurch@xxxxxxxx
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@xxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Reviewed-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/sparse.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/sparse.c~mm-sparse-consistently-do-not-zero-memmap
+++ a/mm/sparse.c
@@ -458,7 +458,7 @@ struct page __init *__populate_section_m
 	if (map)
 		return map;

-	map = memblock_alloc_try_nid(size,
+	map = memblock_alloc_try_nid_raw(size,
 					  PAGE_SIZE, addr,
 					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 	if (!map)
_

Patches currently in -mm which might be from vincent.whitchurch@xxxxxxxx are
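
For reference, a minimal sketch of the two allocation paths described
above, as they look with this change applied.  This is paraphrased from
__populate_section_memmap() in mm/sparse.c rather than copied verbatim;
the panic message and other minor details are simplified.

	struct page __init *__populate_section_memmap(unsigned long pfn,
			unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
	{
		unsigned long size = section_map_size();
		/*
		 * Path 1: carve the memmap out of the pre-allocated raw
		 * (unzeroed) memblock buffer used by sparse_buffer_alloc().
		 */
		struct page *map = sparse_buffer_alloc(size);
		phys_addr_t addr = __pa(MAX_DMA_ADDRESS);

		if (map)
			return map;

		/*
		 * Path 2: fall back to a fresh memblock allocation.  With
		 * this patch it uses the _raw variant, so the memory is no
		 * longer zeroed here either; __init_single_page() later
		 * initializes only the struct pages that are actually used.
		 */
		map = memblock_alloc_try_nid_raw(size, PAGE_SIZE, addr,
						 MEMBLOCK_ALLOC_ACCESSIBLE, nid);
		if (!map)
			panic("%s: Failed to allocate %lu bytes\n",
			      __func__, size);

		return map;
	}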