From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>

pageset_set_high_and_batch() and percpu_pagelist_fraction_sysctl_handler()
both do the same calculation for establishing pcp->high:

	high = zone->managed_pages / percpu_pagelist_fraction;

pageset_set_high_and_batch() also knows when it should be using the
sysctl-provided value or the boot-time default behavior.  There's no
reason to keep percpu_pagelist_fraction_sysctl_handler()'s copy
separate.  So, consolidate them.

The only bummer here is that pageset_set_high_and_batch() is currently
__meminit.  So, axe that and make it available at runtime.

Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
---

 linux.git-davehans/mm/page_alloc.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff -puN mm/page_alloc.c~consolidate-percpu_pagelist_fraction-code mm/page_alloc.c
--- linux.git/mm/page_alloc.c~consolidate-percpu_pagelist_fraction-code	2013-10-15 09:57:06.143624213 -0700
+++ linux.git-davehans/mm/page_alloc.c	2013-10-15 09:57:06.148624435 -0700
@@ -4183,7 +4183,7 @@ static void pageset_setup_from_high_mark
 	pageset_update(&p->pcp, high, batch);
 }
 
-static void __meminit pageset_set_high_and_batch(struct zone *zone,
+static void pageset_set_high_and_batch(struct zone *zone,
 		struct per_cpu_pageset *pcp)
 {
 	if (percpu_pagelist_fraction)
@@ -5785,14 +5785,10 @@ int percpu_pagelist_fraction_sysctl_hand
 		return ret;
 
 	mutex_lock(&pcp_batch_high_lock);
-	for_each_populated_zone(zone) {
-		unsigned long high;
-		high = zone->managed_pages / percpu_pagelist_fraction;
+	for_each_populated_zone(zone)
 		for_each_possible_cpu(cpu)
-			pageset_setup_from_high_mark(
-				per_cpu_ptr(zone->pageset, cpu),
-				high);
-	}
+			pageset_set_high_and_batch(zone,
+				per_cpu_ptr(zone->pageset, cpu));
	mutex_unlock(&pcp_batch_high_lock);
	return 0;
 }
_
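
For reference, after this change the consolidated helper should look roughly
like the sketch below.  The pageset_set_batch()/zone_batchsize() fallback for
the boot-time default path is an assumption based on the existing behavior,
not quoted from the series:

static void pageset_set_high_and_batch(struct zone *zone,
		struct per_cpu_pageset *pcp)
{
	if (percpu_pagelist_fraction)
		/* sysctl override: derive pcp->high from the zone size */
		pageset_setup_from_high_mark(pcp,
			zone->managed_pages / percpu_pagelist_fraction);
	else
		/* boot-time default: size the batch from the zone
		 * (assumed helper names -- sketch only)
		 */
		pageset_set_batch(pcp, zone_batchsize(zone));
}

Both callers (the per-cpu pageset setup path and the sysctl handler) then go
through this single function, so the high-mark calculation lives in one place.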