The patch titled
     Subject: mm/memory_hotplug.c: prevent memory leak when reusing pgdat
has been added to the -mm tree.  Its filename is
     mm-hotplug-prevent-memory-leak-when-reuse-pgdat.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hotplug-prevent-memory-leak-when-reuse-pgdat.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hotplug-prevent-memory-leak-when-reuse-pgdat.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Subject: mm/memory_hotplug.c: prevent memory leak when reusing pgdat

When a node is offlined in try_offline_node(), its pgdat is not released,
so a later hotadd_new_pgdat() may reuse it.  However, hotadd_new_pgdat()
currently reallocates pgdat->per_cpu_nodestats unconditionally, so the
per-cpu allocation still held by a reused pgdat is leaked.

Prevent the leak by allocating per_cpu_nodestats only when the pgdat is
newly allocated, and by resetting the existing per-cpu node stats when a
pgdat is reused.

Link: http://lkml.kernel.org/r/20190813020608.10194-1-richardw.yang@xxxxxxxxxxxxxxx
Signed-off-by: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Oscar Salvador <OSalvador@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory_hotplug.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/mm/memory_hotplug.c~mm-hotplug-prevent-memory-leak-when-reuse-pgdat
+++ a/mm/memory_hotplug.c
@@ -919,8 +919,11 @@ static pg_data_t __ref *hotadd_new_pgdat
 		if (!pgdat)
 			return NULL;
 
+		pgdat->per_cpu_nodestats =
+			alloc_percpu(struct per_cpu_nodestat);
 		arch_refresh_nodedata(nid, pgdat);
 	} else {
+		int cpu;
 		/*
 		 * Reset the nr_zones, order and classzone_idx before reuse.
 		 * Note that kswapd will init kswapd_classzone_idx properly
@@ -929,6 +932,12 @@ static pg_data_t __ref *hotadd_new_pgdat
 		pgdat->nr_zones = 0;
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = 0;
+		for_each_online_cpu(cpu) {
+			struct per_cpu_nodestat *p;
+
+			p = per_cpu_ptr(pgdat->per_cpu_nodestats, cpu);
+			memset(p, 0, sizeof(*p));
+		}
 	}
 
 	/* we can use NODE_DATA(nid) from here */
@@ -938,7 +947,6 @@ static pg_data_t __ref *hotadd_new_pgdat
 
 	/* init node's zones as empty zones, we don't have any present pages.*/
 	free_area_init_core_hotplug(nid);
-	pgdat->per_cpu_nodestats = alloc_percpu(struct per_cpu_nodestat);
 
 	/*
 	 * The node we allocated has no zone fallback lists. For avoiding
_

Patches currently in -mm which might be from richardw.yang@xxxxxxxxxxxxxxx are

mm-remove-redundant-assignment-of-entry.patch
mm-hotplug-prevent-memory-leak-when-reuse-pgdat.patch
mm-sparse-use-__nr_to_sectionsection_nr-to-get-mem_section.patch
mm-mmapc-refine-find_vma_prev-with-rb_last.patch
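
For readers who want the leak pattern in isolation, below is a minimal
userspace C sketch of what the patch changes.  It is illustrative only:
fake_pgdat, fake_stats, node_cache and the hotadd_node_*() helpers are
invented names standing in for pg_data_t, per_cpu_nodestats, the retained
node data and hotadd_new_pgdat(); the real fix operates on per-cpu memory
via alloc_percpu() and per_cpu_ptr().

/*
 * Userspace sketch of the pattern fixed above -- not kernel code.
 * An "offlined" node object is cached instead of freed; the bug is
 * that the hot-add path reallocates a member the cached object still
 * owns, losing the old allocation.
 */
#include <stdlib.h>
#include <string.h>

struct fake_stats {
	long counters[8];
};

struct fake_pgdat {
	struct fake_stats *stats;	/* stands in for per_cpu_nodestats */
	int nr_zones;
};

/* The "offlined" node data is kept here instead of being freed. */
static struct fake_pgdat *node_cache;

/* Buggy shape: ->stats is reallocated even when the pgdat is reused. */
static struct fake_pgdat *hotadd_node_leaky(void)
{
	struct fake_pgdat *pgdat = node_cache;

	if (!pgdat) {
		pgdat = calloc(1, sizeof(*pgdat));
		if (!pgdat)
			return NULL;
		node_cache = pgdat;
	}
	/* On reuse the old pgdat->stats pointer is overwritten: a leak. */
	pgdat->stats = calloc(1, sizeof(*pgdat->stats));
	return pgdat;
}

/* Fixed shape: allocate only for a new pgdat, reset stats on reuse. */
static struct fake_pgdat *hotadd_node_fixed(void)
{
	struct fake_pgdat *pgdat = node_cache;

	if (!pgdat) {
		pgdat = calloc(1, sizeof(*pgdat));
		if (!pgdat)
			return NULL;
		pgdat->stats = calloc(1, sizeof(*pgdat->stats));
		if (!pgdat->stats) {
			free(pgdat);
			return NULL;
		}
		node_cache = pgdat;
	} else {
		pgdat->nr_zones = 0;
		memset(pgdat->stats, 0, sizeof(*pgdat->stats));
	}
	return pgdat;
}

int main(void)
{
	/*
	 * The second call reuses the cached node; the fixed variant does
	 * not reallocate ->stats.  Swapping in hotadd_node_leaky() makes
	 * the leak visible under a leak checker such as valgrind.
	 */
	(void)hotadd_node_leaky;	/* silence unused-function warnings */
	hotadd_node_fixed();
	hotadd_node_fixed();
	return 0;
}

The design point is the same in both settings: when an object survives
its "offline" phase and is handed back by the reuse path, any member it
still owns must either be kept and reset, or freed before being
reallocated.  The patch keeps the existing per-cpu allocation and resets
it.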