The patch titled
     mm: allocate section_map for sparse_init
has been added to the -mm tree.  Its filename is
     mm-allocate-section_map-for-sparse_init.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: mm: allocate section_map for sparse_init
From: "Yinghai Lu" <yhlu.kernel@xxxxxxxxx>

Allocate section_map in bootmem instead of using __initdata.

Signed-off-by: Yinghai Lu <yhlu.kernel@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxxx>
Cc: Yasunori Goto <y-goto@xxxxxxxxxxxxxx>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Christoph Lameter <clameter@xxxxxxx>
Cc: Mel Gorman <mel@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/sparse.c |   14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff -puN mm/sparse.c~mm-allocate-section_map-for-sparse_init mm/sparse.c
--- a/mm/sparse.c~mm-allocate-section_map-for-sparse_init
+++ a/mm/sparse.c
@@ -287,8 +287,6 @@ struct page __init *sparse_early_mem_map
 	return NULL;
 }
 
-/* section_map pointer array is 64k */
-static __initdata struct page *section_map[NR_MEM_SECTIONS];
 /*
  * Allocate the accumulated non-linear sections, allocate a mem_map
  * for each and record the physical to section mapping.
@@ -298,6 +296,9 @@ void __init sparse_init(void)
 	unsigned long pnum;
 	struct page *map;
 	unsigned long *usemap;
+	struct page **section_map;
+	int size;
+	int node;
 
 	/*
 	 * map is using big page (aka 2M in x86 64 bit)
@@ -307,13 +308,17 @@ void __init sparse_init(void)
 	 * then in big system, the memmory will have a lot hole...
 	 * here try to allocate 2M pages continously.
 	 */
+	size = sizeof(struct page *) * NR_MEM_SECTIONS;
+	section_map = alloc_bootmem(size);
+	if (!section_map)
+		panic("can not allocate section_map\n");
+
 	for (pnum = 0; pnum < NR_MEM_SECTIONS; pnum++) {
 		if (!present_section_nr(pnum))
 			continue;
 		section_map[pnum] = sparse_early_mem_map_alloc(pnum);
 	}
-
 	for (pnum = 0; pnum < NR_MEM_SECTIONS; pnum++) {
 		if (!present_section_nr(pnum))
 			continue;
@@ -329,6 +334,9 @@ void __init sparse_init(void)
 		sparse_init_one_section(__nr_to_section(pnum), pnum,
 								map, usemap);
 	}
+
+	for_each_online_node(node)
+		free_bootmem_node(NODE_DATA(node), __pa(section_map), size);
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
_

Patches currently in -mm which might be from yhlu.kernel@xxxxxxxxx are

mm-fix-boundary-checking-in-free_bootmem_core.patch
git-x86.patch
mm-make-mem_map-allocation-continuous.patch
mm-make-mem_map-allocation-continuous-checkpatch-fixes.patch
mm-fix-alloc_bootmem_core-to-use-fast-searching-for-all-nodes.patch
mm-allocate-section_map-for-sparse_init.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html