Calculate a cpumask of CPUs with per-cpu pages in any zone and only send
an IPI requesting those CPUs to drain their pages to the buddy allocator
if they actually have pages when asked to flush.

The memory-allocation-failure code path for the CPUMASK_OFFSTACK=y config
was tested using the fault injection framework.

Signed-off-by: Gilad Ben-Yossef <gilad@xxxxxxxxxxxxx>
Acked-by: Chris Metcalf <cmetcalf@xxxxxxxxxx>
CC: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
CC: Frederic Weisbecker <fweisbec@xxxxxxxxx>
CC: Russell King <linux@xxxxxxxxxxxxxxxx>
CC: linux-mm@xxxxxxxxx
CC: Christoph Lameter <cl@xxxxxxxxxxxxxxxxxxxx>
CC: Pekka Enberg <penberg@xxxxxxxxxx>
CC: Matt Mackall <mpm@xxxxxxxxxxx>
CC: Sasha Levin <levinsasha928@xxxxxxxxx>
CC: Rik van Riel <riel@xxxxxxxxxx>
CC: Andi Kleen <andi@xxxxxxxxxxxxxx>
---
 mm/page_alloc.c |   18 +++++++++++++++++-
 1 files changed, 17 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9dd443d..44dc6c5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1119,7 +1119,23 @@ void drain_local_pages(void *arg)
  */
 void drain_all_pages(void)
 {
-	on_each_cpu(drain_local_pages, NULL, 1);
+	int cpu;
+	struct zone *zone;
+	cpumask_var_t cpus;
+	struct per_cpu_pageset *pageset;
+
+	if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC))) {
+		for_each_populated_zone(zone) {
+			for_each_online_cpu(cpu) {
+				pageset = per_cpu_ptr(zone->pageset, cpu);
+				if (pageset->pcp.count)
+					cpumask_set_cpu(cpu, cpus);
+			}
+		}
+		on_each_cpu_mask(cpus, drain_local_pages, NULL, 1);
+		free_cpumask_var(cpus);
+	} else
+		on_each_cpu(drain_local_pages, NULL, 1);
 }
 
 #ifdef CONFIG_HIBERNATION
-- 
1.7.0.4
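
Editor's note for reviewers unfamiliar with the helper: the patch relies on
an on_each_cpu_mask() primitive that sends IPIs only to the CPUs in the
given mask. It is defined elsewhere (by an earlier patch in this series),
so the details below are an assumption, not this patch's code. A minimal
sketch of a generic implementation, assuming the usual
smp_call_function_many() semantics, would look roughly like this:

/* Sketch only -- the real helper lives in kernel/smp.c; treat the
 * specifics as an illustration of the assumed behaviour.
 */
void on_each_cpu_mask(const struct cpumask *mask, void (*func)(void *),
		      void *info, bool wait)
{
	int cpu = get_cpu();	/* disable preemption, note this CPU */

	/* IPI every *other* CPU in the mask, optionally waiting. */
	smp_call_function_many(mask, func, info, wait);

	/* If the current CPU is in the mask, run func() locally too,
	 * with interrupts off to match IPI context.
	 */
	if (cpumask_test_cpu(cpu, mask)) {
		local_irq_disable();
		func(info);
		local_irq_enable();
	}
	put_cpu();
}

With a helper of that shape, drain_all_pages() above interrupts only the
CPUs whose pcp.count is non-zero, rather than broadcasting the drain IPI
to every online CPU.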