The quilt patch titled
     Subject: mm: reduce deferred struct page init ifdeffery
has been removed from the -mm tree.  Its filename was
     mm-reduce-deferred-struct-page-init-ifdeffery.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Subject: mm: reduce deferred struct page init ifdeffery
Date: Fri, 9 Aug 2024 14:48:48 +0300

Patch series "mm: Fix several issues with unaccepted memory", v2.

The patchset addresses several issues related to unaccepted memory.

Patch 1/7 is a preparatory cleanup.

Patch 2/7 ensures that __alloc_pages_bulk() will not exhaust all accepted
memory without accepting more.

Patches 3/7-5/7 are preparations for patch 6/7, which fixes
alloc_contig_pages() on machines with unaccepted memory.  This allows, for
example, the allocation of gigantic pages at runtime.

Patch 7/7 enables the kernel to accept memory up to the promo watermark.


This patch (of 7):

Add a dummy _deferred_grow_zone() for !DEFERRED_STRUCT_PAGE_INIT and
remove #ifdefs in two places.

No functional changes.

Link: https://lkml.kernel.org/r/20240809114854.3745464-1-kirill.shutemov@xxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20240809114854.3745464-3-kirill.shutemov@xxxxxxxxxxxxxxx
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c~mm-reduce-deferred-struct-page-init-ifdeffery
+++ a/mm/page_alloc.c
@@ -322,6 +322,11 @@ static inline bool deferred_pages_enable
 {
 	return false;
 }
+
+static inline bool _deferred_grow_zone(struct zone *zone, unsigned int order)
+{
+	return false;
+}
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
@@ -3395,7 +3400,6 @@ check_alloc_wmark:
 			if (cond_accept_memory(zone, order))
 				goto try_this_zone;
-#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/*
 			 * Watermark failed for this zone, but see if we can
 			 * grow this zone if it contains deferred pages.
 			 */
@@ -3404,7 +3408,6 @@ check_alloc_wmark:
 				if (_deferred_grow_zone(zone, order))
 					goto try_this_zone;
 			}
-#endif
 			/* Checked here to keep the fast path fast */
 			BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
 			if (alloc_flags & ALLOC_NO_WATERMARKS)
@@ -3450,13 +3453,11 @@ try_this_zone:
 			if (cond_accept_memory(zone, order))
 				goto try_this_zone;
 
-#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/* Try again if zone has deferred pages */
 			if (deferred_pages_enabled()) {
 				if (_deferred_grow_zone(zone, order))
 					goto try_this_zone;
 			}
-#endif
 		}
 	}
 
_

Patches currently in -mm which might be from kirill.shutemov@xxxxxxxxxxxxxxx are
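
[Editor's note: for readers unfamiliar with the pattern this patch applies, here is a minimal, self-contained C sketch of the same idea outside the kernel. The names feature_enabled(), grow_feature() and the FEATURE_X macro are hypothetical illustrations, not kernel symbols: when the feature is compiled out, static inline stubs that return false let call sites stay free of #ifdef, and the compiler discards the dead branches.]

#include <stdbool.h>
#include <stdio.h>

/* #define FEATURE_X */  /* toggle to compare the two builds */

#ifdef FEATURE_X
static inline bool feature_enabled(void) { return true; }
static inline bool grow_feature(unsigned int order)
{
	/* the real implementation would live here */
	return order < 4;
}
#else
/* Dummy stubs for the disabled case, so callers need no #ifdef guards. */
static inline bool feature_enabled(void) { return false; }
static inline bool grow_feature(unsigned int order)
{
	(void)order;
	return false;
}
#endif

int main(void)
{
	/* Call site stays ifdef-free; with FEATURE_X undefined the whole
	 * condition folds to false and the branch is eliminated. */
	if (feature_enabled() && grow_feature(2))
		printf("feature grew\n");
	else
		printf("feature unavailable or compiled out\n");
	return 0;
}

[This mirrors what the hunks above do with _deferred_grow_zone(): the constant-returning stub keeps the fast path identical while removing two #ifdef blocks, which is why the patch can claim no functional changes.]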