The patch titled
     Subject: mm: page_alloc: generalize order handling in __free_pages_bootmem()
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-generalize-order-handling-in-__free_pages_bootmem.patch

This patch was dropped because an updated version will be merged

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: page_alloc: generalize order handling in __free_pages_bootmem()

__free_pages_bootmem() used to special-case higher-order frees to save
individual page checking with free_pages_bulk().

Nowadays, both zero order and non-zero order frees use free_pages(), which
checks each individual page anyway, and so there is little point in making
the distinction anymore.  The higher-order loop will work just fine for
zero order pages.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Uwe Kleine-König <u.kleine-koenig@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   38 +++++++++++++-------------------------
 1 file changed, 13 insertions(+), 25 deletions(-)

diff -puN mm/page_alloc.c~mm-page_alloc-generalize-order-handling-in-__free_pages_bootmem mm/page_alloc.c
--- a/mm/page_alloc.c~mm-page_alloc-generalize-order-handling-in-__free_pages_bootmem
+++ a/mm/page_alloc.c
@@ -743,36 +743,24 @@ static void __free_pages_ok(struct page
 	local_irq_restore(flags);
 }
 
-/*
- * permit the bootmem allocator to evade page validation on high-order frees
- */
 void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
 {
-	if (order == 0) {
-		__ClearPageReserved(page);
-		set_page_count(page, 0);
-		set_page_refcounted(page);
-		__free_page(page);
-	} else {
-		int loop;
-		unsigned int nr_pages = 1 << order;
-		unsigned int loop;
-
-		prefetchw(page);
-		for (loop = 0; loop < nr_pages; loop++) {
-			struct page *p = &page[loop];
-
-			if (loop + 1 < nr_pages)
-				prefetchw(p + 1);
-			__ClearPageReserved(p);
-			set_page_count(p, 0);
-		}
+	unsigned int nr_pages = 1 << order;
+	unsigned int loop;
 
-		set_page_refcounted(page);
-		__free_pages(page, order);
+	prefetchw(page);
+	for (loop = 0; loop < nr_pages; loop++) {
+		struct page *p = &page[loop];
+
+		if (loop + 1 < nr_pages)
+			prefetchw(p + 1);
+		__ClearPageReserved(p);
+		set_page_count(p, 0);
 	}
-}
+	set_page_refcounted(page);
+	__free_pages(page, order);
+}
 
 /*
  * The order of subdivision here is critical for the IO subsystem.
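
For reference, this is what __free_pages_bootmem() reads like with the hunk
above applied -- a sketch reassembled from the diff for readability, with
explanatory comments added here that are not part of the patch itself:

void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
{
	unsigned int nr_pages = 1 << order;
	unsigned int loop;

	/* One loop for every order, including order 0. */
	prefetchw(page);
	for (loop = 0; loop < nr_pages; loop++) {
		struct page *p = &page[loop];

		/* Prefetch the next struct page while handling this one. */
		if (loop + 1 < nr_pages)
			prefetchw(p + 1);
		__ClearPageReserved(p);
		set_page_count(p, 0);
	}
	/* __free_pages() checks each page individually anyway, so the old
	 * order == 0 special case no longer saves anything. */
	set_page_refcounted(page);
	__free_pages(page, order);
}
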
_

Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

linux-next.patch
memcg-add-mem_cgroup_replace_page_cache-to-fix-lru-issue.patch
memcg-keep-root-group-unchanged-if-creation-fails.patch
mm-page-writebackc-make-determine_dirtyable_memory-static-again.patch
vmscan-promote-shared-file-mapped-pages.patch
vmscan-activate-executable-pages-after-first-usage.patch
mm-do-not-stall-in-synchronous-compaction-for-thp-allocations.patch
mm-do-not-stall-in-synchronous-compaction-for-thp-allocations-v3.patch
vmscan-add-task-name-to-warn_scan_unevictable-messages.patch
memcg-make-mem_cgroup_split_huge_fixup-more-efficient.patch
memcg-fix-pgpgin-pgpgout-documentation.patch
mm-page_cgroup-check-page_cgroup-arrays-in-lookup_page_cgroup-only-when-necessary.patch
page_cgroup-add-helper-function-to-get-swap_cgroup-cleanup.patch
memcg-clean-up-soft_limit_tree-if-allocation-fails.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html