The patch titled
     Subject: /proc/meminfo: add percpu populated pages count
has been added to the -mm tree.  Its filename is
     proc-add-percpu-populated-pages-count-to-meminfo.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/proc-add-percpu-populated-pages-count-to-meminfo.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/proc-add-percpu-populated-pages-count-to-meminfo.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Dennis Zhou (Facebook)" <dennisszhou@xxxxxxxxx>
Subject: /proc/meminfo: add percpu populated pages count

Currently, percpu memory only exposes allocation and utilization
information via debugfs.  This is more or less only useful for
understanding fragmentation and allocation information at a per-chunk
level, with a few global counters, and it is gated behind a config option.
BPF and cgroup, for example, have seen an increase in use, causing
increased use of percpu memory.  Let's make it easier to identify how much
memory is being used.

This patch adds the "Percpu" stat to meminfo to make it easy to look up
how much percpu memory is in use.  This number includes the cost of all
allocated backing pages, not just a per-unit, per-chunk view.  Metadata is
excluded.  I think excluding metadata is fair because the backing memory
scales with the number of cpus and can quickly outweigh the metadata.  It
also keeps this calculation light.

Link: http://lkml.kernel.org/r/20180807184723.74919-1-dennisszhou@xxxxxxxxx
Signed-off-by: Dennis Zhou <dennisszhou@xxxxxxxxx>
Acked-by: Tejun Heo <tj@xxxxxxxxxx>
Acked-by: Roman Gushchin <guro@xxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Alexey Dobriyan <adobriyan@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/filesystems/proc.txt |    3 ++
 fs/proc/meminfo.c                  |    2 +
 include/linux/percpu.h             |    2 +
 mm/percpu.c                        |   29 +++++++++++++++++++++++++++
 4 files changed, 36 insertions(+)

--- a/Documentation/filesystems/proc.txt~proc-add-percpu-populated-pages-count-to-meminfo
+++ a/Documentation/filesystems/proc.txt
@@ -870,6 +870,7 @@ Committed_AS:   100056 kB
 VmallocTotal:   112216 kB
 VmallocUsed:       428 kB
 VmallocChunk:   111088 kB
+Percpu:          62080 kB
 AnonHugePages:   49152 kB
 ShmemHugePages:      0 kB
 ShmemPmdMapped:      0 kB
@@ -959,6 +960,8 @@ Committed_AS: The amount of memory prese
 VmallocTotal: total size of vmalloc memory area
  VmallocUsed: amount of vmalloc area which is used
 VmallocChunk: largest contiguous block of vmalloc area which is free
+      Percpu: Memory allocated to the percpu allocator used to back percpu
+              allocations. This stat excludes the cost of metadata.
 ..............................................................................
--- a/fs/proc/meminfo.c~proc-add-percpu-populated-pages-count-to-meminfo
+++ a/fs/proc/meminfo.c
@@ -7,6 +7,7 @@
 #include <linux/mman.h>
 #include <linux/mmzone.h>
 #include <linux/proc_fs.h>
+#include <linux/percpu.h>
 #include <linux/quicklist.h>
 #include <linux/seq_file.h>
 #include <linux/swap.h>
@@ -121,6 +122,7 @@ static int meminfo_proc_show(struct seq_
 		   (unsigned long)VMALLOC_TOTAL >> 10);
 	show_val_kb(m, "VmallocUsed:    ", 0ul);
 	show_val_kb(m, "VmallocChunk:   ", 0ul);
+	show_val_kb(m, "Percpu:         ", pcpu_nr_pages());
 
 #ifdef CONFIG_MEMORY_FAILURE
 	seq_printf(m, "HardwareCorrupted: %5lu kB\n",
--- a/include/linux/percpu.h~proc-add-percpu-populated-pages-count-to-meminfo
+++ a/include/linux/percpu.h
@@ -149,4 +149,6 @@ extern phys_addr_t per_cpu_ptr_to_phys(v
 	(typeof(type) __percpu *)__alloc_percpu(sizeof(type),		\
 						__alignof__(type))
 
+extern unsigned long pcpu_nr_pages(void);
+
 #endif /* __LINUX_PERCPU_H */
--- a/mm/percpu.c~proc-add-percpu-populated-pages-count-to-meminfo
+++ a/mm/percpu.c
@@ -170,6 +170,14 @@ static LIST_HEAD(pcpu_map_extend_chunks)
 int pcpu_nr_empty_pop_pages;
 
 /*
+ * The number of populated pages in use by the allocator, protected by
+ * pcpu_lock.  This number is kept per a unit per chunk (i.e. when a page gets
+ * allocated/deallocated, it is allocated/deallocated in all units of a chunk
+ * and increments/decrements this count by 1).
+ */
+static unsigned long pcpu_nr_populated;
+
+/*
  * Balance work is used to populate or destroy chunks asynchronously.  We
  * try to keep the number of populated free pages between
  * PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one
@@ -1232,6 +1240,7 @@ static void pcpu_chunk_populated(struct
 
 	bitmap_set(chunk->populated, page_start, nr);
 	chunk->nr_populated += nr;
+	pcpu_nr_populated += nr;
 
 	if (!for_alloc) {
 		chunk->nr_empty_pop_pages += nr;
@@ -1260,6 +1269,7 @@ static void pcpu_chunk_depopulated(struc
 	chunk->nr_populated -= nr;
 	chunk->nr_empty_pop_pages -= nr;
 	pcpu_nr_empty_pop_pages -= nr;
+	pcpu_nr_populated -= nr;
 }
 
 /*
@@ -2176,6 +2186,9 @@ int __init pcpu_setup_first_chunk(const
 	pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
 	pcpu_chunk_relocate(pcpu_first_chunk, -1);
 
+	/* include all regions of the first chunk */
+	pcpu_nr_populated += PFN_DOWN(size_sum);
+
 	pcpu_stats_chunk_alloc();
 	trace_percpu_create_chunk(base_addr);
 
@@ -2746,6 +2759,22 @@ void __init setup_per_cpu_areas(void)
 #endif /* CONFIG_SMP */
 
 /*
+ * pcpu_nr_pages - calculate total number of populated backing pages
+ *
+ * This reflects the number of pages populated to back chunks.  Metadata is
+ * excluded in the number exposed in meminfo as the number of backing pages
+ * scales with the number of cpus and can quickly outweigh the memory used for
+ * metadata.  It also keeps this calculation nice and simple.
+ *
+ * RETURNS:
+ * Total number of populated backing pages in use by the allocator.
+ */
+unsigned long pcpu_nr_pages(void)
+{
+	return pcpu_nr_populated * pcpu_nr_units;
+}
+
+/*
  * Percpu allocator is initialized early during boot when neither slab or
  * workqueue is available.  Plug async management until everything is up
  * and running.
_

Patches currently in -mm which might be from dennisszhou@xxxxxxxxx are

proc-add-percpu-populated-pages-count-to-meminfo.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
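
For anyone who wants to sanity-check the new counter, the arithmetic behind
the "Percpu:" line is small enough to reproduce in userspace.  The sketch
below is not part of the patch; the values of pcpu_nr_populated and
pcpu_nr_units are hypothetical (they depend entirely on the machine) and
were only picked so the result matches the 62080 kB example used in the
documentation hunk.  The multiplication mirrors pcpu_nr_pages(), and the
kB conversion mirrors show_val_kb() assuming 4 KiB pages.

/*
 * Illustrative only, not part of the patch: how the "Percpu:" value in
 * /proc/meminfo is derived.  Counter values below are hypothetical.
 */
#include <stdio.h>

int main(void)
{
	unsigned long pcpu_nr_populated = 194;	/* assumed: populated pages per unit */
	unsigned long pcpu_nr_units = 80;	/* assumed: units (one per possible CPU) */
	unsigned long page_kb = 4;		/* assumed: 4 KiB page size */

	/* pcpu_nr_pages(): total populated backing pages across all units */
	unsigned long pages = pcpu_nr_populated * pcpu_nr_units;

	/* show_val_kb() reports the page count in kilobytes */
	printf("Percpu:         %8lu kB\n", pages * page_kb);	/* prints 62080 kB */
	return 0;
}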