The current implementation of the percpu allocator uses the total possible number of CPUs (nr_cpu_ids) as the number of units to allocate per chunk. Every alloc_percpu() request of N bytes therefore allocates N * nr_cpu_ids bytes, even if the number of present CPUs is much smaller. The allocator grows by adding chunks while keeping the number of units per chunk constant; this is done to simplify CPU hotplug/remove, since the per-CPU area for every possible CPU is always preallocated.

Problem: this behavior can lead to inefficient memory usage on big server machines and VMs, where nr_cpu_ids is huge.

Example from my experiment, a 2-vCPU VM with hotplug support (up to 128 CPUs):

[    0.105989] smpboot: Allowing 128 CPUs, 126 hotplug CPUs

By creating a huge number of active and/or dying memory cgroups, I can generate active percpu allocations of 100 MB (per single CPU), including fragmentation overhead. But in that case the total percpu memory consumption (as reported in /proc/meminfo) is 12.8 GB. BTW, chunks are ~75% full in my experiment, so fragmentation is not a concern here.

Out of those 12.8 GB:
- 0.2 GB are actually used by the present vCPUs, and
- 12.6 GB are "wasted"! (see the sketch below)
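To make the arithmetic explicit, here is a trivial userspace sketch. The numbers are the ones from my experiment above, hardcoded as assumptions (128 possible CPUs, 2 present vCPUs, 100 MB of live percpu data per unit); "GB" means 1000 MB, matching the figures above:

#include <stdio.h>

int main(void)
{
        unsigned long nr_cpu_ids  = 128;  /* possible CPUs, from the dmesg line above */
        unsigned long nr_present  = 2;    /* vCPUs actually present */
        unsigned long per_unit_mb = 100;  /* live percpu data per unit, incl. fragmentation */

        unsigned long total_mb = per_unit_mb * nr_cpu_ids;  /* what /proc/meminfo reports */
        unsigned long used_mb  = per_unit_mb * nr_present;  /* backing real vCPUs */

        printf("total: %.1f GB, used: %.1f GB, wasted: %.1f GB\n",
               total_mb / 1000.0, used_mb / 1000.0,
               (total_mb - used_mb) / 1000.0);
        return 0;
}

This prints "total: 12.8 GB, used: 0.2 GB, wasted: 12.6 GB", i.e. the split above.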
I've seen production VMs where Percpu consumes 16-20 GB of memory; Roman reported 100 GB.

There are ways to reduce the "wasted" memory overhead, such as: disabling CPU hotplug; reducing the maximum number of CPUs reported by the hypervisor and/or firmware; or using the possible_cpus= kernel parameter. But none of these eliminates the fundamental issue.

Suggestion: support scaling percpu chunks by the number of units, i.e. allocate/deallocate units in existing chunks on CPU hotplug/remove events. A rough sketch of where this could hook in follows.
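Not a patch, just a sketch of the shape I have in mind, using the existing hotplug state machine. cpuhp_setup_state() and CPUHP_BP_PREPARE_DYN are the real interfaces; pcpu_populate_unit()/pcpu_depopulate_unit() are hypothetical helpers that would have to walk all chunks and map/unmap the unit belonging to the CPU. They don't exist today, and all the hard parts (pcpu_lock, atomic allocations, the first chunk, vmalloc-space mappings) are hidden inside them:

#include <linux/cpuhotplug.h>
#include <linux/init.h>

/* Hypothetical helpers: map/unmap @cpu's unit in every existing chunk. */
int pcpu_populate_unit(unsigned int cpu);
void pcpu_depopulate_unit(unsigned int cpu);

static int pcpu_unit_prepare(unsigned int cpu)
{
        /* Prepare stage: runs on a control CPU before @cpu is brought up. */
        return pcpu_populate_unit(cpu);
}

static int pcpu_unit_dead(unsigned int cpu)
{
        /* Runs on a control CPU after @cpu is completely dead. */
        pcpu_depopulate_unit(cpu);
        return 0;
}

static int __init pcpu_unit_hotplug_init(void)
{
        int ret;

        /* Prepare-stage dynamic state: allocate before online, free after dead. */
        ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "percpu:unit",
                                pcpu_unit_prepare, pcpu_unit_dead);
        return ret < 0 ? ret : 0;
}
early_initcall(pcpu_unit_hotplug_init);

A prepare-stage state seems like the right fit here, since a unit must exist before the incoming CPU runs any code and can only be torn down once the outgoing CPU is fully dead.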
Any thoughts?

Thanks!
--Alexey