The patch titled
     Drain per-cpu lists when high-order allocations fail
has been added to the -mm tree.  Its filename is
     drain-per-cpu-lists-when-high-order-allocations-fail.patch

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this

------------------------------------------------------
Subject: Drain per-cpu lists when high-order allocations fail
From: Mel Gorman <mel@xxxxxxxxx>

Per-cpu pages can accidentally cause fragmentation because they are free,
but pinned, pages in an otherwise contiguous block.  With this patch
applied, the per-cpu caches are drained after direct reclaim has been
entered if the requested order is greater than 0.  It simply reuses the
code already used by suspend and hotplug.

Signed-off-by: Mel Gorman <mel@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   28 +++++++++++++++++++++++++++-
 1 files changed, 27 insertions(+), 1 deletion(-)

diff -puN mm/page_alloc.c~drain-per-cpu-lists-when-high-order-allocations-fail mm/page_alloc.c
--- a/mm/page_alloc.c~drain-per-cpu-lists-when-high-order-allocations-fail
+++ a/mm/page_alloc.c
@@ -901,7 +901,9 @@ void mark_free_pages(struct zone *zone)
 
         spin_unlock_irqrestore(&zone->lock, flags);
 }
+#endif /* CONFIG_PM */
 
+#if defined(CONFIG_PM) || defined(CONFIG_PAGE_GROUP_BY_MOBILITY)
 /*
  * Spill all of this CPU's per-cpu pages back into the buddy allocator.
  */
@@ -913,7 +915,28 @@ void drain_local_pages(void)
         __drain_pages(smp_processor_id());
         local_irq_restore(flags);
 }
-#endif /* CONFIG_PM */
+
+void smp_drain_local_pages(void *arg)
+{
+        drain_local_pages();
+}
+
+/*
+ * Spill all the per-cpu pages from all CPUs back into the buddy allocator
+ */
+void drain_all_local_pages(void)
+{
+        unsigned long flags;
+
+        local_irq_save(flags);
+        __drain_pages(smp_processor_id());
+        local_irq_restore(flags);
+
+        smp_call_function(smp_drain_local_pages, NULL, 0, 1);
+}
+#else
+void drain_all_local_pages(void) {}
+#endif /* CONFIG_PM || CONFIG_PAGE_GROUP_BY_MOBILITY */
 
 /*
  * Free a 0-order page
@@ -1490,6 +1513,9 @@ nofail_alloc:
 
         cond_resched();
 
+        if (order != 0)
+                drain_all_local_pages();
+
         if (likely(did_some_progress)) {
                 page = get_page_from_freelist(gfp_mask, order,
                                                 zonelist, alloc_flags);
_
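[Editorial aside: the effect the diff above relies on is easy to model
outside the kernel.  The sketch below is a minimal userspace toy, not
kernel code; every name in it (toy_drain_local_pages, order_available,
NR_PAGES and so on) is invented for illustration.  It shows how pages
parked in per-cpu caches are free yet invisible to the buddy lists, so
a high-order request fails until they are spilled back.]

/*
 * Toy userspace model -- NOT kernel code.  Eight "pages", two "CPUs":
 * every page is free, but two of them sit in per-cpu caches, so no run
 * of four contiguous pages exists until the caches are drained.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8
#define NR_CPUS  2

static bool page_free[NR_PAGES];          /* the global "buddy" pool */
static int pcp_cache[NR_CPUS][NR_PAGES];  /* page indices held per CPU */
static int pcp_count[NR_CPUS];

/* Is there a run of 2^order contiguous free pages in the global pool? */
static bool order_available(int order)
{
        int run = 0, need = 1 << order;

        for (int i = 0; i < NR_PAGES; i++) {
                run = page_free[i] ? run + 1 : 0;
                if (run >= need)
                        return true;
        }
        return false;
}

/* What drain_local_pages() does in spirit: give the pages back. */
static void toy_drain_local_pages(int cpu)
{
        while (pcp_count[cpu] > 0)
                page_free[pcp_cache[cpu][--pcp_count[cpu]]] = true;
}

int main(void)
{
        for (int i = 0; i < NR_PAGES; i++)
                page_free[i] = true;

        /* CPU 0 caches page 2, CPU 1 caches page 5. */
        page_free[2] = false; pcp_cache[0][pcp_count[0]++] = 2;
        page_free[5] = false; pcp_cache[1][pcp_count[1]++] = 5;

        printf("order-2 before drain: %s\n",
               order_available(2) ? "available" : "fails");

        /* The equivalent of drain_all_local_pages(): drain every CPU. */
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                toy_drain_local_pages(cpu);

        printf("order-2 after drain:  %s\n",
               order_available(2) ? "available" : "fails");
        return 0;
}

[In the patch itself the same spill happens via drain_local_pages() on
the current CPU plus an smp_call_function() cross-call to the other
CPUs, and only when order != 0, since draining buys nothing for a
single-page allocation.]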
Patches currently in -mm which might be from mel@xxxxxxxxx are

add-a-bitmap-that-is-used-to-track-flags-affecting-a-block-of-pages.patch
add-__gfp_movable-for-callers-to-flag-allocations-from-high-memory-that-may-be-migrated.patch
add-__gfp_movable-for-callers-to-flag-allocations-from-low-memory-that-may-be-migrated.patch
split-the-free-lists-for-movable-and-unmovable-allocations.patch
choose-pages-from-the-per-cpu-list-based-on-migration-type.patch
add-a-configure-option-to-group-pages-by-mobility.patch
drain-per-cpu-lists-when-high-order-allocations-fail.patch
move-free-pages-between-lists-on-steal.patch
group-short-lived-and-reclaimable-kernel-allocations.patch
group-high-order-atomic-allocations.patch
bias-the-placement-of-kernel-pages-at-lower-pfns.patch
be-more-agressive-about-stealing-when-migrate_reclaimable-allocations-fallback.patch
create-the-zone_movable-zone.patch
allow-huge-page-allocations-to-use-gfp_high_movable.patch
x86-specify-amount-of-kernel-memory-at-boot-time.patch
ppc-and-powerpc-specify-amount-of-kernel-memory-at-boot-time.patch
x86_64-specify-amount-of-kernel-memory-at-boot-time.patch
ia64-specify-amount-of-kernel-memory-at-boot-time.patch
add-documentation-for-additional-boot-parameter-and-sysctl.patch
ext2-reservations.patch
add-__gfp_movable-for-callers-to-flag-allocations-from-high-memory-that-may-be-migrated-swap-prefetch.patch
add-debugging-aid-for-memory-initialisation-problems.patch
add-debugging-aid-for-memory-initialisation-problems-fix.patch

-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html