The patch titled
     Subject: mm, page_alloc: remove setup_pageset()
has been added to the -mm tree.  Its filename is
     mm-page_alloc-remove-setup_pageset.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-remove-setup_pageset.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-remove-setup_pageset.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vlastimil Babka <vbabka@xxxxxxx>
Subject: mm, page_alloc: remove setup_pageset()

We initialize boot-time pagesets with setup_pageset(), which sets high and
batch values that effectively disable pcplists.

We can remove this wrapper if we just set these values for all pagesets in
pageset_init().  Non-boot pagesets then subsequently update them to the
proper values.

No functional change.

Link: https://lkml.kernel.org/r/20201111092812.11329-4-vbabka@xxxxxxx
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Acked-by: Pankaj Gupta <pankaj.gupta@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-remove-setup_pageset
+++ a/mm/page_alloc.c
@@ -5905,7 +5905,7 @@ static void build_zonelists(pg_data_t *p
  * not check if the processor is online before following the pageset pointer.
  * Other parts of the kernel may not check if the zone is available.
  */
-static void setup_pageset(struct per_cpu_pageset *p);
+static void pageset_init(struct per_cpu_pageset *p);
 static DEFINE_PER_CPU(struct per_cpu_pageset, boot_pageset);
 static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
@@ -5973,7 +5973,7 @@ build_all_zonelists_init(void)
 	 * (a chicken-egg dilemma).
 	 */
 	for_each_possible_cpu(cpu)
-		setup_pageset(&per_cpu(boot_pageset, cpu));
+		pageset_init(&per_cpu(boot_pageset, cpu));
 
 	mminit_verify_zonelist();
 	cpuset_init_current_mems_allowed();
@@ -6292,12 +6292,15 @@ static void pageset_init(struct per_cpu_
 	pcp = &p->pcp;
 	for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++)
 		INIT_LIST_HEAD(&pcp->lists[migratetype]);
-}
 
-static void setup_pageset(struct per_cpu_pageset *p)
-{
-	pageset_init(p);
-	pageset_update(&p->pcp, 0, 1);
+	/*
+	 * Set batch and high values safe for a boot pageset. A true percpu
+	 * pageset's initialization will update them subsequently. Here we don't
+	 * need to be as careful as pageset_update() as nobody can access the
+	 * pageset yet.
+	 */
+	pcp->high = 0;
+	pcp->batch = 1;
 }
 
 /*
_

Patches currently in -mm which might be from vbabka@xxxxxxx are

mm-slub-use-kmem_cache_debug_flags-in-deactivate_slab.patch
mm-page_alloc-clean-up-pageset-high-and-batch-update.patch
mm-page_alloc-calculate-pageset-high-and-batch-once-per-zone.patch
mm-page_alloc-remove-setup_pageset.patch
mm-page_alloc-simplify-pageset_update.patch
mm-page_alloc-cache-pageset-high-and-batch-in-struct-zone.patch
mm-page_alloc-move-draining-pcplists-to-page-isolation-users.patch
mm-page_alloc-disable-pcplists-during-memory-offline.patch
mm-page_alloc-disable-pcplists-during-memory-offline-fix.patch
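
A note on why high = 0 and batch = 1 effectively disable pcplists: on the
free fast path, a page goes onto the per-cpu list and the list is drained
whenever pcp->count reaches pcp->high. The following is a minimal sketch of
that logic, loosely modelled on free_unref_page_commit() in mm/page_alloc.c
from this era (~v5.10); the helper name and reduced signature are
illustrative, not the kernel's exact code:

	/* Sketch of the pcplist free path (illustrative, not verbatim kernel code). */
	static void sketch_free_to_pcp(struct zone *zone, struct per_cpu_pages *pcp,
				       struct page *page, int migratetype)
	{
		list_add(&page->lru, &pcp->lists[migratetype]);
		pcp->count++;

		/*
		 * With the boot values set in pageset_init() above, high == 0
		 * makes this test true for every freed page, and batch == 1
		 * flushes that one page straight back to the buddy allocator,
		 * so the pcplist never accumulates pages.
		 */
		if (pcp->count >= pcp->high)
			free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
	}

Once a real per-cpu pageset is initialized, pageset_update() raises high and
batch to their computed values and per-cpu batching takes effect as usual.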