Currently memmap_init_zone_device() ends up initializing 32768 pages
when it only needs to initialize 128 given tail page reuse. That number
is worse with 1GB compound page geometries, 262144 instead of 128.
Update memmap_init_zone_device() to skip redundant initialization,
detailed below.

When a pgmap @geometry is set, all pages are mapped at a given huge page
alignment and use compound pages to describe them, as opposed to a
struct page per 4K page.

With @geometry > PAGE_SIZE and when struct pages are stored in RAM
(!altmap) most tail pages are reused. Consequently, the amount of unique
struct pages is a lot smaller than the total amount of struct pages
being mapped.

The altmap path is left alone since it does not support memory savings
based on compound devmap geometries.

Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
---
 mm/page_alloc.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c1497a928005..13464b4188b4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6610,6 +6610,20 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	}
 }
+/*
+ * With compound page geometry and when struct pages are stored in ram most
+ * tail pages are reused. Consequently, the amount of unique struct pages to
+ * initialize is a lot smaller than the total amount of struct pages being
+ * mapped. This is a paired / mild layering violation with explicit knowledge
+ * of how the sparse_vmemmap internals handle compound pages in the lack
+ * of an altmap. See vmemmap_populate_compound_pages().
+ */
+static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
+					      unsigned long nr_pages)
+{
+	return !altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : nr_pages;
+}
+
 static void __ref memmap_init_compound(struct page *head,
 				       unsigned long head_pfn,
 				       unsigned long zone_idx, int nid,
 				       struct dev_pagemap *pgmap,
@@ -6673,7 +6687,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 			continue;
 
 		memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
-				     pfns_per_compound);
+				     compound_nr_pages(altmap, pfns_per_compound));
 	}
 
 	pr_info("%s initialised %lu pages in %ums\n", __func__,
-- 
2.17.1
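
As an aside, below is a minimal userspace sketch (not part of the patch)
of the per-compound arithmetic behind compound_nr_pages(), assuming a 4K
PAGE_SIZE and a 64-byte struct page (typical x86-64 values); the DEMO_*
macros and the chosen 2M/1G geometries are illustrative only:

/*
 * Illustrative sketch only: per-compound arithmetic assumed by
 * compound_nr_pages(), with PAGE_SIZE = 4K and sizeof(struct page) = 64.
 */
#include <stdio.h>

#define DEMO_PAGE_SIZE		4096UL
#define DEMO_STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */

int main(void)
{
	unsigned long geometries[] = { 2UL << 20, 1UL << 30 };	/* 2M, 1G */

	for (int i = 0; i < 2; i++) {
		/* total struct pages describing one compound page */
		unsigned long total = geometries[i] / DEMO_PAGE_SIZE;
		/* only the two unique vmemmap pages need initializing */
		unsigned long unique =
			2 * (DEMO_PAGE_SIZE / DEMO_STRUCT_PAGE_SIZE);

		printf("%4luM geometry: %6lu struct pages mapped, %lu initialized\n",
		       geometries[i] >> 20, total, unique);
	}

	return 0;
}

For the 1G geometry this reproduces the 262144-vs-128 figure quoted in
the commit message; the !altmap check in the real helper is what gates
the shortcut, since the altmap path does not reuse tail pages and still
initializes every struct page.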