The quilt patch titled
     Subject: mm/page_alloc: make zone_pcp_update() static
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-make-zone_pcp_update-static.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Subject: mm/page_alloc: make zone_pcp_update() static
Date: Fri, 16 Sep 2022 15:22:43 +0800

Since commit b92ca18e8ca5 ("mm/page_alloc: disassociate the pcp->high from
pcp->batch"), zone_pcp_update() is only used in mm/page_alloc.c.  Move
zone_pcp_update() up to avoid forward declaration and then make it static.
No functional change intended.

Link: https://lkml.kernel.org/r/20220916072257.9639-3-linmiaohe@xxxxxxxxxx
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/internal.h   |    1 -
 mm/page_alloc.c |   22 +++++++++++-----------
 2 files changed, 11 insertions(+), 12 deletions(-)

--- a/mm/internal.h~mm-page_alloc-make-zone_pcp_update-static
+++ a/mm/internal.h
@@ -367,7 +367,6 @@ extern int user_min_free_kbytes;
 
 extern void free_unref_page(struct page *page, unsigned int order);
 extern void free_unref_page_list(struct list_head *list);
-extern void zone_pcp_update(struct zone *zone, int cpu_online);
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
 extern void zone_pcp_enable(struct zone *zone);
--- a/mm/page_alloc.c~mm-page_alloc-make-zone_pcp_update-static
+++ a/mm/page_alloc.c
@@ -7243,6 +7243,17 @@ void __meminit setup_zone_pageset(struct
 }
 
 /*
+ * The zone indicated has a new number of managed_pages; batch sizes and percpu
+ * page high values need to be recalculated.
+ */
+static void zone_pcp_update(struct zone *zone, int cpu_online)
+{
+	mutex_lock(&pcp_batch_high_lock);
+	zone_set_pageset_high_and_batch(zone, cpu_online);
+	mutex_unlock(&pcp_batch_high_lock);
+}
+
+/*
  * Allocate per cpu pagesets and initialize them.
  * Before this call only boot pagesets were available.
  */
@@ -9474,17 +9485,6 @@ void free_contig_range(unsigned long pfn
 EXPORT_SYMBOL(free_contig_range);
 
 /*
- * The zone indicated has a new number of managed_pages; batch sizes and percpu
- * page high values need to be recalculated.
- */
-void zone_pcp_update(struct zone *zone, int cpu_online)
-{
-	mutex_lock(&pcp_batch_high_lock);
-	zone_set_pageset_high_and_batch(zone, cpu_online);
-	mutex_unlock(&pcp_batch_high_lock);
-}
-
-/*
 * Effectively disable pcplists for the zone by setting the high limit to 0
 * and draining all cpus. A concurrent page freeing on another CPU that's about
 * to put the page on pcplist will either finish before the drain and the page
_

Patches currently in -mm which might be from linmiaohe@xxxxxxxxxx are
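
For illustration only, a minimal standalone C sketch (not part of the patch;
the names recalc_limits() and update_all() are hypothetical) of the pattern the
commit applies: a helper used only within one translation unit is defined above
its first caller and marked static, so neither a forward declaration nor a
header export is needed.

#include <stdio.h>

/* File-local helper: defined before its caller, so no prototype is needed. */
static void recalc_limits(int managed_pages)
{
	/* Stand-in for recalculating per-cpu batch and high values. */
	printf("recalculated limits for %d managed pages\n", managed_pages);
}

void update_all(void)
{
	recalc_limits(1024);
}

int main(void)
{
	update_all();
	return 0;
}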