The patch titled
     Subject: mm: bootmem: try harder to free pages in bulk
has been removed from the -mm tree.  Its filename was
     mm-bootmem-try-harder-to-free-pages-in-bulk.patch

This patch was dropped because it is obsolete

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: bootmem: try harder to free pages in bulk

The loop that frees pages to the page allocator while bootstrapping tries
to free higher-order blocks only when the starting address is aligned to
that block size.  Otherwise it will free all pages on that node
one-by-one.

Change it to free individual pages up to the first aligned block and then
try higher-order frees from there.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Uwe Kleine-König <u.kleine-koenig@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/bootmem.c |   22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff -puN mm/bootmem.c~mm-bootmem-try-harder-to-free-pages-in-bulk mm/bootmem.c
--- a/mm/bootmem.c~mm-bootmem-try-harder-to-free-pages-in-bulk
+++ a/mm/bootmem.c
@@ -171,7 +171,6 @@ void __init free_bootmem_late(unsigned l
 
 static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata)
 {
-	int aligned;
 	struct page *page;
 	unsigned long start, end, pages, count = 0;
 
@@ -181,14 +180,8 @@ static unsigned long __init free_all_boo
 	start = bdata->node_min_pfn;
 	end = bdata->node_low_pfn;
 
-	/*
-	 * If the start is aligned to the machines wordsize, we might
-	 * be able to free pages in bulks of that order.
-	 */
-	aligned = !(start & (BITS_PER_LONG - 1));
-
-	bdebug("nid=%td start=%lx end=%lx aligned=%d\n",
-		bdata - bootmem_node_data, start, end, aligned);
+	bdebug("nid=%td start=%lx end=%lx\n",
+		bdata - bootmem_node_data, start, end);
 
 	while (start < end) {
 		unsigned long *map, idx, vec;
@@ -196,12 +189,17 @@ static unsigned long __init free_all_boo
 		map = bdata->node_bootmem_map;
 		idx = start - bdata->node_min_pfn;
 		vec = ~map[idx / BITS_PER_LONG];
-
-		if (aligned && vec == ~0UL) {
+		/*
+		 * If we have a properly aligned and fully unreserved
+		 * BITS_PER_LONG block of pages in front of us, free
+		 * it in one go.
+		 */
+		if (IS_ALIGNED(start, BITS_PER_LONG) && vec == ~0UL) {
 			int order = ilog2(BITS_PER_LONG);
 
 			__free_pages_bootmem(pfn_to_page(start), order);
 			count += BITS_PER_LONG;
+			start += BITS_PER_LONG;
 		} else {
 			unsigned long off = 0;
 
@@ -214,8 +212,8 @@ static unsigned long __init free_all_boo
 				vec >>= 1;
 				off++;
 			}
+			start = ALIGN(start + 1, BITS_PER_LONG);
 		}
-		start += BITS_PER_LONG;
 	}
 
 	page = virt_to_page(bdata->node_bootmem_map);
_

Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

linux-next.patch
memcg-add-mem_cgroup_replace_page_cache-to-fix-lru-issue.patch
memcg-keep-root-group-unchanged-if-creation-fails.patch
mm-page-writebackc-make-determine_dirtyable_memory-static-again.patch
vmscan-promote-shared-file-mapped-pages.patch
vmscan-activate-executable-pages-after-first-usage.patch
mm-do-not-stall-in-synchronous-compaction-for-thp-allocations.patch
mm-do-not-stall-in-synchronous-compaction-for-thp-allocations-v3.patch
vmscan-add-task-name-to-warn_scan_unevictable-messages.patch
mm-page_alloc-generalize-order-handling-in-__free_pages_bootmem.patch
memcg-make-mem_cgroup_split_huge_fixup-more-efficient.patch
memcg-fix-pgpgin-pgpgout-documentation.patch
mm-page_cgroup-check-page_cgroup-arrays-in-lookup_page_cgroup-only-when-necessary.patch
page_cgroup-add-helper-function-to-get-swap_cgroup-cleanup.patch
memcg-clean-up-soft_limit_tree-if-allocation-fails.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
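
For readers who want to see the shape of the new walk outside the kernel,
below is a minimal userspace sketch, not the kernel code itself.  The tiny
bitmap (indexed by absolute pfn for brevity), the pfn range, and the
free_pages_sim() helper are made-up stand-ins for the node bootmem map and
__free_pages_bootmem().  It mirrors the structure of the patched loop:
page-by-page frees from an unaligned start up to the next BITS_PER_LONG
boundary, then one bulk free per fully unreserved, aligned word.

#include <stdio.h>
#include <string.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((unsigned long)((a) - 1)))

/* One bit per page, bit set = page reserved; indexed by absolute pfn. */
static unsigned long map[4];

/* Stand-in for __free_pages_bootmem(): just report what would be freed. */
static void free_pages_sim(unsigned long pfn, unsigned long nr)
{
	printf("free %lu page(s) starting at pfn %lu\n", nr, pfn);
}

int main(void)
{
	unsigned long start = 5;		/* unaligned on purpose */
	unsigned long end = 4 * BITS_PER_LONG;	/* one past the last pfn */
	unsigned long count = 0;

	memset(map, 0, sizeof(map));		/* everything unreserved... */
	map[2] |= 1UL << 3;			/* ...except one page */

	while (start < end) {
		/* vec has a 1 bit for every unreserved page in this word */
		unsigned long vec = ~map[start / BITS_PER_LONG];

		if (IS_ALIGNED(start, BITS_PER_LONG) && vec == ~0UL) {
			/* aligned, fully unreserved word: free it in one go */
			free_pages_sim(start, BITS_PER_LONG);
			count += BITS_PER_LONG;
			start += BITS_PER_LONG;
		} else {
			/* free page by page up to the next word boundary */
			unsigned long off = 0;

			vec >>= start & (BITS_PER_LONG - 1);
			while (vec) {
				if (vec & 1) {
					free_pages_sim(start + off, 1);
					count++;
				}
				vec >>= 1;
				off++;
			}
			start = ALIGN(start + 1, BITS_PER_LONG);
		}
	}
	printf("freed %lu pages total\n", count);
	return 0;
}

The first iteration shows the point of the change: with the old code, an
unaligned starting pfn forced order-0 frees for the node's entire range,
whereas here only the pages before the first word boundary (and any
partially reserved words) take the page-by-page path.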