Memory fragmentation and kvm_alloc_stage2_pgd

kvm_alloc_stage2_pgd has to do an order-9 allocation, i.e. 512
contiguous pages (2 MB with 4 kB pages), I think.
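
For reference, the allocation site is roughly this (paraphrasing the
3.16-era arch/arm/kvm/mmu.c from memory, so treat the details as
approximate rather than verbatim):

int kvm_alloc_stage2_pgd(struct kvm *kvm)
{
	pgd_t *pgd;

	if (kvm->arch.pgd != NULL) {
		kvm_err("kvm_arch already initialized?\n");
		return -EINVAL;
	}

	/*
	 * The whole stage 2 pgd is a single physically contiguous
	 * buddy allocation of S2_PGD_ORDER pages -- the order-9
	 * request seen in the trace below.
	 */
	pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, S2_PGD_ORDER);
	if (!pgd)
		return -ENOMEM;

	memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
	kvm_clean_pgd(pgd);
	kvm->arch.pgd = pgd;

	return 0;
}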

This often leads to qemu failing to start a guest when free memory is
relatively low or fragmented -- e.g. if you have one VM running, a
healthy number of host applications, and perhaps "just" 4GB free; then
you decide to run the libguestfs test suite.
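
To see whether it's really fragmentation rather than a plain lack of
memory, it helps to look at /proc/buddyinfo just before the failure
and count free order-9 blocks.  A quick sketch of that check (it
assumes the usual buddyinfo layout with one column per order, 0..10):

#include <stdio.h>

int main(void)
{
	char line[512];
	FILE *f = fopen("/proc/buddyinfo", "r");

	if (!f) {
		perror("/proc/buddyinfo");
		return 1;
	}

	while (fgets(line, sizeof line, f)) {
		char zone[32];
		int node;
		unsigned long c[11];	/* free blocks of order 0..10 */

		if (sscanf(line,
			   "Node %d, zone %31s %lu %lu %lu %lu %lu %lu"
			   " %lu %lu %lu %lu %lu",
			   &node, zone, &c[0], &c[1], &c[2], &c[3], &c[4],
			   &c[5], &c[6], &c[7], &c[8], &c[9], &c[10]) == 13)
			printf("node %d zone %-8s order-9 blocks free: %lu\n",
			       node, zone, c[9]);
	}
	fclose(f);
	return 0;
}

In the report below, for instance, the Normal zone has exactly one
2048kB (order-9) block left, and the larger blocks in the DMA zone are
all CMA pageblocks which a GFP_KERNEL allocation can't use -- so this
looks like fragmentation rather than being genuinely out of memory.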

Any suggestions how to deal with this?
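
One userspace band-aid is to force memory compaction just before each
qemu launch.  That only improves the odds of an order-9 block being
free; it doesn't remove the contiguity requirement.  Rough sketch,
assuming CONFIG_COMPACTION (which provides
/proc/sys/vm/compact_memory) and running as root:

#include <stdio.h>

static int poke(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/*
	 * Heavy-handed but optional: drop clean page cache and
	 * reclaimable slab so compaction has more free pages to merge.
	 */
	poke("/proc/sys/vm/drop_caches", "3");

	/* Ask the kernel to compact all zones on all nodes. */
	return poke("/proc/sys/vm/compact_memory", "1") ? 1 : 0;
}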

Rich.

NB: this is not the memory leak discussed previously -- that was fixed
by applying:

4f853a714bf16338ff5261128e6c7ae2569e9505
arm/arm64: KVM: Fix and refactor unmap_range

----------------------------------------------------------------------
[268578.291005] qemu-system-aar: page allocation failure: order:9, mode:0xd0
[268578.297817] CPU: 3 PID: 26187 Comm: qemu-system-aar Not tainted 3.16.0-0.rc7.git4.1.rwmj4.fc22.aarch64 #1
[268578.307451] Call trace:
[268578.309982] [<ffffffc000088d10>] dump_backtrace+0x0/0x174
[268578.315446] [<ffffffc000088ea0>] show_stack+0x1c/0x28
[268578.320589] [<ffffffc0007d4ae0>] dump_stack+0x80/0xac
[268578.325736] [<ffffffc0001b4744>] warn_alloc_failed+0xd8/0x140
[268578.331551] [<ffffffc0001b9730>] __alloc_pages_nodemask+0x914/0xb90
[268578.337900] [<ffffffc0001b99e0>] __get_free_pages+0x34/0xac
[268578.343541] [<ffffffc0000a1a50>] kvm_alloc_stage2_pgd+0x2c/0x88
[268578.349549] [<ffffffc00009eb94>] kvm_arch_init_vm+0x28/0xa0
[268578.355189] [<ffffffc00009b550>] kvm_dev_ioctl+0xfc/0x4e8
[268578.360675] [<ffffffc00023a390>] do_vfs_ioctl+0x364/0x5a4
[268578.366170] [<ffffffc00023a65c>] SyS_ioctl+0x8c/0xa4
[268578.371198] Mem-Info:
[268578.373546] DMA per-cpu:
[268578.376177] CPU    0: hi:  186, btch:  31 usd:   0
[268578.381034] CPU    1: hi:  186, btch:  31 usd:   0
[268578.385911] CPU    2: hi:  186, btch:  31 usd:  19
[268578.390767] CPU    3: hi:  186, btch:  31 usd:   0
[268578.395643] CPU    4: hi:  186, btch:  31 usd:   0
[268578.400499] CPU    5: hi:  186, btch:  31 usd:   0
[268578.405353] CPU    6: hi:  186, btch:  31 usd:   0
[268578.410231] CPU    7: hi:  186, btch:  31 usd:   0
[268578.415091] Normal per-cpu:
[268578.417983] CPU    0: hi:  186, btch:  31 usd:   0
[268578.422837] CPU    1: hi:  186, btch:  31 usd:   0
[268578.427724] CPU    2: hi:  186, btch:  31 usd:  42
[268578.432581] CPU    3: hi:  186, btch:  31 usd:   0
[268578.437458] CPU    4: hi:  186, btch:  31 usd:   0
[268578.442315] CPU    5: hi:  186, btch:  31 usd:   0
[268578.447196] CPU    6: hi:  186, btch:  31 usd:   0
[268578.452053] CPU    7: hi:  186, btch:  31 usd:   0
[268578.456936] active_anon:905569 inactive_anon:179367 isolated_anon:0
 active_file:1343354 inactive_file:1343154 isolated_file:0
 unevictable:0 dirty:41358 writeback:0 unstable:0
 free:25107 slab_reclaimable:262491 slab_unreclaimable:31308
 mapped:22452 shmem:74 pagetables:2859 bounce:0
 free_cma:3995
[268578.490794] DMA free:67996kB min:4100kB low:5124kB high:6148kB active_anon:851792kB inactive_anon:261100kB active_file:1333008kB inactive_file:1332204kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4192256kB managed:4163884kB mlocked:0kB dirty:43304kB writeback:0kB mapped:24516kB shmem:100kB slab_reclaimable:268248kB slab_unreclaimable:32288kB kernel_stack:672kB pagetables:2308kB unstable:0kB bounce:0kB free_cma:15980kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[268578.534199] lowmem_reserve[]: 0 12036 12036
[268578.538534] Normal free:32168kB min:12136kB low:15168kB high:18204kB active_anon:2770484kB inactive_anon:456368kB active_file:4040668kB inactive_file:4040412kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:12582912kB managed:12325816kB mlocked:0kB dirty:122128kB writeback:0kB mapped:65292kB shmem:196kB slab_reclaimable:781716kB slab_unreclaimable:92944kB kernel_stack:2768kB pagetables:9128kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[268578.582548] lowmem_reserve[]: 0 0 0
[268578.586178] DMA: 2083*4kB (UEMC) 646*8kB (UEMR) 612*16kB (UEMR) 287*32kB (UEMRC) 148*64kB (UMR) 63*128kB (UM) 7*256kB (UM) 2*512kB (RC) 1*1024kB (C) 1*2048kB (C) 3*4096kB (C) = 68188kB
[268578.602896] Normal: 2836*4kB (UEM) 121*8kB (UEM) 121*16kB (UEMR) 89*32kB (UEMR) 39*64kB (UEMR) 23*128kB (UEMR) 13*256kB (UEM) 5*512kB (UM) 2*1024kB (MR) 1*2048kB (R) 0*4096kB = 32520kB
[268578.619619] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[268578.628148] 2687070 total pagecache pages
[268578.632228] 123 pages in swap cache
[268578.635798] Swap cache stats: add 591, delete 468, find 98/134
[268578.641690] Free swap  = 16775380kB
[268578.645245] Total swap = 16777212kB
[268578.648834] 4193792 pages RAM
[268578.651878] 0 pages HighMem/MovableOnly
[268578.655805] 64274 pages reserved

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/



