Let's count the number of CMA pages per zone and print them in
/proc/zoneinfo.

Having access to the total number of CMA pages per zone is helpful for
debugging purposes to know where exactly the CMA pages ended up, and to
figure out how many pages of a zone might behave differently, even after
some of these pages might already have been allocated. As one example,
CMA pages that are part of a kernel zone cannot be used for ordinary
kernel allocations but instead behave more like ZONE_MOVABLE.

For now, we are only able to get the global number of total and free CMA
pages from /proc/meminfo and the number of free CMA pages per zone from
/proc/zoneinfo.

Example output after this patch when booting a 6 GiB QEMU VM with
"hugetlb_cma=2G":

  # cat /proc/zoneinfo | grep cma
          cma      0
        nr_free_cma  0
          cma      0
        nr_free_cma  0
          cma      524288
        nr_free_cma  493016
          cma      0
          cma      0

  # cat /proc/meminfo | grep Cma
  CmaTotal:        2097152 kB
  CmaFree:         1972064 kB

Note: We track/print the per-zone CMA page count only with CONFIG_CMA;
"nr_free_cma" in /proc/zoneinfo is currently also printed without
CONFIG_CMA.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
Cc: linux-api@xxxxxxxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---

v1 -> v2:
- Print/track only with CONFIG_CMA
- Extend patch description

---
 include/linux/mmzone.h | 6 ++++++
 mm/page_alloc.c        | 1 +
 mm/vmstat.c            | 5 +++++
 3 files changed, 12 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ae588b2f87ef..27d22fb22e05 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -503,6 +503,9 @@ struct zone {
 	 * bootmem allocator):
 	 *	managed_pages = present_pages - reserved_pages;
 	 *
+	 * cma pages is present pages that are assigned for CMA use
+	 * (MIGRATE_CMA).
+	 *
 	 * So present_pages may be used by memory hotplug or memory power
 	 * management logic to figure out unmanaged pages by checking
 	 * (present_pages - managed_pages). And managed_pages should be used
@@ -527,6 +530,9 @@ struct zone {
 	atomic_long_t		managed_pages;
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
+#ifdef CONFIG_CMA
+	unsigned long		cma_pages;
+#endif
 
 	const char		*name;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..9a82375bbcb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2168,6 +2168,7 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	}
 
 	adjust_managed_page_count(page, pageblock_nr_pages);
+	page_zone(page)->cma_pages += pageblock_nr_pages;
 }
 #endif
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 7758486097f9..957680db41fa 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1650,6 +1650,11 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 		   zone->spanned_pages,
 		   zone->present_pages,
 		   zone_managed_pages(zone));
+#ifdef CONFIG_CMA
+	seq_printf(m,
+		   "\n        cma      %lu",
+		   zone->cma_pages);
+#endif
 
 	seq_printf(m,
 		   "\n        protection: (%ld",
-- 
2.29.2
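
A quick sketch for deriving how many CMA pages per zone have already been
allocated, combining the new "cma" counter with the existing "nr_free_cma"
counter. The field positions and the per-zone ordering of the two counters
are assumed from the example output above; zones without an "nr_free_cma"
line simply produce no output:

  # awk '/^Node/        { zone = $NF }    # remember the current zone name
         /^[ \t]+cma /  { total = $2 }    # new per-zone total of CMA pages
         /nr_free_cma/  { printf "%-8s: %d CMA pages allocated\n", zone, total - $2 }
        ' /proc/zoneinfo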