On Sun, 13 Nov 2011, Gilad Ben-Yossef wrote:

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9dd443d..44dc6c5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1119,7 +1119,23 @@ void drain_local_pages(void *arg)
>   */
>  void drain_all_pages(void)
>  {
> -	on_each_cpu(drain_local_pages, NULL, 1);
> +	int cpu;
> +	struct zone *zone;
> +	cpumask_var_t cpus;
> +	struct per_cpu_pageset *pageset;

We usually name such pointers "pcp" in the page allocator.

> +
> +	if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC))) {
> +		for_each_populated_zone(zone) {
> +			for_each_online_cpu(cpu) {
> +				pageset = per_cpu_ptr(zone->pageset, cpu);
> +				if (pageset->pcp.count)
> +					cpumask_set_cpu(cpu, cpus);
> +			}

The pagesets are allocated on bootup from the per-cpu areas. You may get a
better access pattern by using for_each_online_cpu as the outer loop, because
there is a likelihood of linearly increasing accesses as you loop through the
zones for a particular cpu.

Acked-by: Christoph Lameter <cl@xxxxxxxxx>
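
For illustration, a rough sketch of the loop reordering suggested above
(hypothetical, not the submitted patch; it assumes the same cpu, zone and
cpus variables from the hunk, and uses the "pcp" naming convention):

	for_each_online_cpu(cpu) {
		bool has_pcps = false;

		for_each_populated_zone(zone) {
			struct per_cpu_pageset *pcp =
				per_cpu_ptr(zone->pageset, cpu);

			/* Any pages sitting on this CPU's pcp lists? */
			if (pcp->pcp.count) {
				has_pcps = true;
				break;
			}
		}

		/* Only CPUs with non-empty pcp lists need the drain IPI. */
		if (has_pcps)
			cpumask_set_cpu(cpu, cpus);
	}

With the CPU as the outer loop, all of one CPU's pagesets are touched
back to back, so the accesses stay within that CPU's per-cpu area, and the
inner loop can also stop early once a non-empty pageset is found.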