Recent KVM memory usage strangeness

Hello,

I am observing some strange KVM memory-usage behaviour lately. It is
unfortunately hard to tell since which kernel, but if I had to guess, it
started with 4.14 in general, as opposed to 4.9 previously.

First, the memory usage of VMs is reported bizarrely in top:

------
KiB Mem:  16326888 total, 16016944 used,   309944 free,   118712 buffers
KiB Swap:  8388604 total,     4864 used,  8383740 free.  8797796 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
12813 root      20   0 3038684 0.012t   4496 S   4.6 81.9 543:07.22 kvm-a     
24701 root      20   0 1800876 0.011t   4316 R  16.9 74.4 774:30.35 kvm-b   
 6547 root      20   0 1639372 3.494g   5124 S   2.0 22.4 274:32.05 kvm-c   
16775 root      10 -10 1468384 2.715g   4376 S   1.0 17.4 757:57.86 kvm-d     
23821 root      20   0 2972160 2.036g   4284 S  18.9 13.1   3964:45 kvm-e   
 1016 www-data  20   0  183528  43196   8244 S   0.0  0.3   0:06.53 smokeping.+ 
  941 smokepi+  20   0  174020  31560   3664 S   0.0  0.2   0:18.19 /usr/sbin/+ 
----

As you can see, the top two processes show 12 GB and 11 GB respectively as
resident (RES). These values grew steadily to this point over time since the
VMs were launched. But that usage is nowhere near reality, as the host
machine only has about 7 GB of RAM in use in total:

# free -m
             total       used       free     shared    buffers     cached
Mem:         15944      15641        303         10        115       8591
-/+ buffers/cache:       6933       9010
Swap:         8191          4       8187
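For what it's worth, this is how I read those numbers (a quick sanity check;
the constants are copied from the top and free outputs above — top's RES
column with a "t" suffix is in tebibytes, free -m is in mebibytes):

```python
# Sanity check on the reported numbers, copied from the outputs above.
TIB_IN_GIB = 1024

res_kvm_a = 0.012 * TIB_IN_GIB   # top shows "0.012t" RES for kvm-a
res_kvm_b = 0.011 * TIB_IN_GIB   # and "0.011t" for kvm-b

# free -m: the "-/+ buffers/cache" used row is what is actually consumed
host_used_gib = 6933 / 1024

print(f"kvm-a RES: {res_kvm_a:.1f} GiB")              # 12.3 GiB
print(f"kvm-b RES: {res_kvm_b:.1f} GiB")              # 11.3 GiB
print(f"host actually used: {host_used_gib:.2f} GiB")  # 6.77 GiB
```

So those two processes alone supposedly hold ~23.6 GiB resident on a 16 GiB
host, while the host itself only has ~6.8 GiB in real use.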

And the first VM listed only has 2GB of RAM allocated to it:

root     12813     1  3 Jan26 ?        09:03:09 qemu-system-x86_64 -enable-kvm
-daemonize -monitor unix:./monitor,server,nowait -name a,process=kvm-a
-smp 4 -m 2048 -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd -drive
if=none,id=hd,cache=none,aio=native,format=raw,file=/dev/vg/vm-a,discard=unmap,detect-zeroes=off
-device virtio-net-pci,netdev=net0,mac=xx:xx:xx:xx:xx:xx -netdev
tap,vhost=on,id=net0,script=/path/to/xyz.sh -vnc [::]:1 -cpu
host,host-cache-info=on -balloon virtio -machine q35,accel=kvm

The next ones are about the same, 1 or 2 GB of RAM each. One has only 512 MB
allocated (the one now showing 2.7 GB resident...).

Secondly, I use (or try to use) KSM. At some point its efficiency dropped
like a rock. Where I previously saw numbers like 2 GB stored in 100 MB, now I
only get:

/sys/kernel/mm/ksm/full_scans:1482
/sys/kernel/mm/ksm/max_page_sharing:256
/sys/kernel/mm/ksm/merge_across_nodes:1
/sys/kernel/mm/ksm/pages_shared:98528
/sys/kernel/mm/ksm/pages_sharing:104775
/sys/kernel/mm/ksm/pages_to_scan:128
/sys/kernel/mm/ksm/pages_unshared:1138639
/sys/kernel/mm/ksm/pages_volatile:184595
/sys/kernel/mm/ksm/run:1
/sys/kernel/mm/ksm/sleep_millisecs:100
/sys/kernel/mm/ksm/stable_node_chains:9
/sys/kernel/mm/ksm/stable_node_chains_prune_millisecs:2000
/sys/kernel/mm/ksm/stable_node_dups:47
/sys/kernel/mm/ksm/use_zero_pages:1

Running: 1
Sharing: 409.28 MB stored in 384.88 MB (1.06:1)
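(That "Sharing" line is just the sysfs counters above converted to MB,
assuming 4 KiB pages — shown here so the ratio is clear:)

```python
# Derive the "Sharing" summary line from the KSM sysfs counters above,
# assuming the usual 4 KiB page size on x86.
PAGE_SIZE = 4096  # bytes

pages_sharing = 104775  # pages deduplicated against a shared page
pages_shared = 98528    # unique pages actually kept in memory

stored_mb = pages_sharing * PAGE_SIZE / 2**20  # logical data
in_mem_mb = pages_shared * PAGE_SIZE / 2**20   # physical memory backing it
ratio = pages_sharing / pages_shared

print(f"{stored_mb:.2f} MB stored in {in_mem_mb:.2f} MB ({ratio:.2f}:1)")
# -> 409.28 MB stored in 384.88 MB (1.06:1)
```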

Which is somewhat strange, considering that 3 of the running VMs are Windows
and 2 are Debian; shouldn't they have more "in common"?

I tried with "use_zero_pages" set to both 0 and 1 ("1" sounded like the
better idea), but it doesn't seem to change anything.

So the question is: is something wrong with KVM's RAM usage on my system?
Could this be a recent bug, or have I misconfigured something?

Transparent hugepages come to mind for some reason, and I do have
TRANSPARENT_HUGEPAGE_ALWAYS=y set. But while that could explain the low KSM
results (could it?), it shouldn't cause these memleak-looking readings in
'top', right?
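In case it helps, this is roughly how I inspect the THP state at runtime (a
config check only, reading the standard sysfs locations; values will of
course differ per host):

```python
# Read-only look at the transparent hugepage knobs; nothing here
# changes any setting.
from pathlib import Path

thp = Path("/sys/kernel/mm/transparent_hugepage")
for name in ("enabled", "defrag"):
    f = thp / name
    if f.exists():
        # e.g. "enabled: [always] madvise never"
        print(f"{name}: {f.read_text().strip()}")
```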

-- 
With respect,
Roman


