On Thu, Apr 22, 2021 at 12:44:37AM +0000, Alexey Makhalov wrote:
> The current implementation of the percpu allocator uses the total possible
> number of CPUs (nr_cpu_ids) to get the number of units to allocate per
> chunk. Every alloc_percpu() request of N bytes will allocate N*nr_cpu_ids
> bytes even if the number of present CPUs is much smaller. The percpu
> allocator grows by the number of chunks, keeping the number of units per
> chunk constant. This is done that way to simplify CPU hotplug/remove by
> having the per-cpu area preallocated.
>
> Problem: this behavior can lead to inefficient memory usage for big server
> machines and VMs, where nr_cpu_ids is huge.
>
> Example from my experiment:
> 2 vCPU VM with hotplug support (up to 128):

Maybe I'm missing something, but I find the setup very strange. Who needs
a 2-CPU machine which *maybe* can be extended to a 128-CPU machine on the
fly?

> [ 0.105989] smpboot: Allowing 128 CPUs, 126 hotplug CPUs
>
> By creating a huge number of active and/or dying memory cgroups, I can
> generate active percpu allocations of 100 MB (per single CPU) including
> fragmentation overhead. But in that case the total percpu memory
> consumption (reported in /proc/meminfo) will be 12.8 GB. BTW, chunks are
> filled to ~75% in my experiment, so fragmentation is not a concern.
>
> Out of 12.8 GB:
> - 0.2 GB are actually used by the present vCPUs, and
> - 12.6 GB are "wasted"!
>
> I've seen production VMs consuming 16-20 GB of memory by Percpu. Roman
> reported 100 GB.

My case is completely different and has nothing to do with this problem:
the machine had a huge number of outstanding percpu allocations, caused by
another problem.

> There are ways to reduce the "wasted" memory overhead, such as disabling
> CPU hotplug, reducing the maximum number of CPUs reported by the
> hypervisor and/or firmware, or using the possible_cpus= kernel parameter.
> But they won't eliminate the fundamental issue with "wasted" memory.
>
> Suggestion: support scaling of percpu chunks by the number of units in
> them, i.e. allocate/deallocate units for existing chunks on CPU
> hotplug/remove events.

I guess most users don't have this problem because the number of possible
CPUs and the actual number of CPUs are usually equal or not that
different. Someone who really depends on such a setup can try implementing
it, but I'm not sure it's trivial/possible to do without adding overhead
for the majority of users.
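
Just to make the arithmetic above explicit, here is a minimal standalone
sketch (plain userspace C, not kernel code; the 100 MB, 2-present and
128-possible figures are simply taken from the report above) of how the
12.8 GB number comes about when every allocation is backed for all
possible CPUs:

/*
 * Hypothetical standalone program, not kernel code: it only redoes the
 * 2-present / 128-possible vCPU arithmetic from the report above.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long per_cpu_bytes = 100ULL << 20;  /* ~100 MB of live percpu data per unit */
	unsigned int nr_cpu_ids = 128;                    /* possible CPUs: one chunk unit each */
	unsigned int nr_present = 2;                      /* CPUs actually present */

	unsigned long long total  = per_cpu_bytes * nr_cpu_ids;  /* what Percpu in /proc/meminfo ends up covering */
	unsigned long long used   = per_cpu_bytes * nr_present;  /* units backing present vCPUs */
	unsigned long long wasted = total - used;                 /* units reserved for CPUs that may never appear */

	printf("total:  %llu MB\n", total  >> 20);  /* 12800 MB ~ 12.8 GB */
	printf("used:   %llu MB\n", used   >> 20);  /*   200 MB ~  0.2 GB */
	printf("wasted: %llu MB\n", wasted >> 20);  /* 12600 MB ~ 12.6 GB */

	return 0;
}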