I'm not sure where exactly to raise this issue, but this may be as
good a place as any.

On my aarch64 hardware, which has 16 GB of physical RAM, I can run
about 3000 KVM guests (serially), after which memory gets so low that
I cannot run any more.

After reboot:

$ free
             total       used       free     shared    buffers     cached
Mem:      16721856    5592448   11129408       7296     152192    4874112
-/+ buffers/cache:     566144   16155712
Swap:      8388544          0    8388544

After the first 1000 guests have run:

$ free
             total       used       free     shared    buffers     cached
Mem:      16721856    9993280    6728576       7296     249088    4874944
-/+ buffers/cache:    4869248   11852608
Swap:      8388544          0    8388544

As a rough guide, it looks as if about 4303 KB is leaked by each KVM
instance: the "-/+ buffers/cache" used figure grows from 566144 KB to
4869248 KB over 1000 runs, ie. (4869248 - 566144) / 1000 ≈ 4303 KB
per guest.

I'm using 3.15.0 for both the host and guest kernels.  The kernel has
a few non-upstream patches for hardware enablement, but is very close
to upstream.  I'm using qemu from git.

The kernel uses 64 KB pages, so the leak is approximately 67 pages
per run (4303 / 64 ≈ 67).

There are no large userspace processes which could account for the
leak, and there is nothing unusual in slabinfo / slabtop.  So it
would appear to be kernel memory allocated directly from the page
allocator using __get_free_pages (ie. not kmalloc, which would show
up in slabinfo).

I have spent a bit of time inserting printks around memory
allocations in arch/arm/kvm/mmu.c (since that file seems to be
responsible for guest page tables and guest memory allocation), but I
haven't come up with anything conclusive.

Have you seen anything like this before?  What do you think I could
do to get to the bottom of this problem?

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
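
P.S. In case it helps to see what I mean by "inserting printks": the
instrumentation I've been adding looks roughly like the sketch below.
It is illustrative only -- the wrapper functions and the counter are
mine and don't exist in mmu.c.  The idea is just to check whether the
page allocations and frees in that file balance out across a guest's
lifetime:

/*
 * Debug wrappers around the raw page allocator calls (illustrative
 * only; kvm_dbg_* and s2_pages_outstanding are my own additions, not
 * anything already in arch/arm/kvm/mmu.c).
 */
#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/printk.h>

static atomic_long_t s2_pages_outstanding = ATOMIC_LONG_INIT(0);

static void *kvm_dbg_alloc_pages(gfp_t gfp, unsigned int order)
{
	void *p = (void *)__get_free_pages(gfp, order);

	if (p) {
		/* Count pages, not allocations, so mixed orders add up. */
		atomic_long_add(1L << order, &s2_pages_outstanding);
		pr_info("kvm-dbg: alloc order %u, outstanding %ld pages\n",
			order, atomic_long_read(&s2_pages_outstanding));
	}
	return p;
}

static void kvm_dbg_free_pages(void *p, unsigned int order)
{
	free_pages((unsigned long)p, order);
	atomic_long_sub(1L << order, &s2_pages_outstanding);
	pr_info("kvm-dbg: free order %u, outstanding %ld pages\n",
		order, atomic_long_read(&s2_pages_outstanding));
}

With something like this substituted for each __get_free_pages /
free_pages pair in mmu.c, a counter that does not return to its
previous value after a guest exits ought to point at the leaking call
site -- so far it hasn't caught the ~67 pages per run, which is why I
suspect the allocations are happening somewhere else.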