Re: KVM/QEMU Memory Ballooning

On Thu, Sep 17, 2020 at 09:06:51AM +0000, Sprencz, Pal Csongor (GE Healthcare) wrote:
> In short we have a ScientificLinux7 base host OS system, on top of
> that I would want to run a KVM/QEMU virtual machine.
> The kvm version used on the host OS is the following:
> qemu-kvm-common-1.5.3-173.el7_8.1.x86_64
> libvirt-daemon-driver-qemu-4.5.0-23.el7_7.6.x86_64
> qemu-kvm-1.5.3-173.el7_8.1.x86_64
> The guest Linux OS is Suse.
> The VM cfg does contain the mem balloon device configuration.
> Inside of the VM we have a Kubernetes. The host system has 64GB, the VM
> has maxmemory and currentmemory set to 32GB.

Ok, so even with the VM running, your host has many tens of GB of free
memory available for other tasks.

> I would like to ask you, how can we test that the memory ballooning
> mechanism works? Does it work in this version or not? What I see at
> the host level is that if I put the host under stress with a huge
> memory allocation, the VM resident memory size decreases, but it does
> not decrease as much as expected, and it starts to swap out to the
> host swap disk. The VM in idle state has 32GB total memory and 21GB
> free memory. And what I see is that the RSS size of the qemu process
> decreases by 5-6GB.

I fear you might be mis-interpreting what the balloon device actually
does.

It is a totally *manual* mechanism.  You have booted the guest with
32GB set as maxmemory and currentmemory. So initially the guest will
have its full 32GB available and no balloon driver activity will
take place.

The balloon driver will only do something when the host administrator
*explicitly* sets a balloon target in QEMU. This can be done via the
libvirt virDomainSetMemory API / virsh setmem command.
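
For example, to shrink the guest's balloon target to 16 GiB you could
run something like "virsh setmem <guest> 16777216 --live" (the value is
in KiB by default), or do the equivalent through the libvirt-python
bindings. A minimal sketch, assuming the bindings are installed and a
guest named "sl7-guest" (the name is just a placeholder):

  # Sketch only: set the balloon target for a running guest.
  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("sl7-guest")   # placeholder guest name

  # virDomainSetMemory takes the new balloon target in KiB,
  # so 16 GiB == 16 * 1024 * 1024 KiB.
  dom.setMemory(16 * 1024 * 1024)

  conn.close()

The guest's balloon driver will then try to release memory back to the
host; raising the target again (up to maxmemory) gives it back to the
guest.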

There is *nothing* in either libvirt or QEMU that monitors host memory
pressure, nor anything that automatically sets balloon driver targets.

It is possible to create an application that monitors host memory and
uses the libvirt APIs to set the balloon driver target. That is outside
the scope of what libvirt does as a core project. There have been 3rd
party projects that try to do this though, such as oVirt's "mom" app
(Memory Overcommit Manager).
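
As an illustration of the kind of policy such an app implements, here
is a toy sketch in Python, assuming the libvirt-python bindings and a
single guest called "sl7-guest"; the watermark and step values are
arbitrary placeholders, not anything libvirt defines:

  # Toy host-side policy loop - libvirt itself does not do this.
  import time
  import libvirt

  STEP_KIB = 1024 * 1024                 # shrink in 1 GiB steps
  LOW_WATERMARK_KIB = 4 * 1024 * 1024    # act when host free < 4 GiB

  def host_available_kib():
      # MemAvailable from /proc/meminfo is already in KiB
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("MemAvailable:"):
                  return int(line.split()[1])
      return 0

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("sl7-guest")   # placeholder guest name

  while True:
      if host_available_kib() < LOW_WATERMARK_KIB:
          # virDomainGetInfo reports the current balloon size in KiB
          # as the third element of the returned tuple.
          current = dom.info()[2]
          target = max(current - STEP_KIB, STEP_KIB)
          dom.setMemory(target)          # lower the balloon target
      time.sleep(10)

A real implementation would also grow the target back when host
pressure goes away, handle multiple guests, and so on - that is the
sort of logic an app like "mom" provides.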

> Do you have any procedure for how this can be tested? Or is there any
> log where I can see that the memory ballooning has started to work?

So the out-of-the-box behaviour, if you have host memory pressure, is
that QEMU will get pushed out to swap. You need to make sure you have
enough swap to cope with the worst case memory load on the host, to
avoid the risk of the OOM Killer waking up.
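
To see whether the balloon driver is actually reacting to a target you
can query the balloon statistics with "virsh dommemstat <guest>", or
through the virDomainMemoryStats API. A minimal sketch with the
libvirt-python bindings (how many stats are reported depends on the
QEMU version and the guest's virtio-balloon driver; "actual", the
current balloon size in KiB, is the most basic one):

  # Sketch: dump the balloon statistics for one guest.
  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("sl7-guest")   # placeholder guest name

  for name, value in dom.memoryStats().items():
      print("%-12s %d KiB" % (name, value))

Comparing "actual" against the target you set, and watching the free
memory reported inside the guest (e.g. with "free"), shows whether the
balloon has inflated.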

Finally note that QEMU's default behaviour will not make guest RAM
resident when it starts up. A guest with 32 GB of RAM configured
may only use 1 GB initially. Further pages of guest RAM only become
resident as the guest OS touches each page.
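
One simple way to watch this is to track the resident set size of the
qemu-kvm process as the guest boots and touches more of its RAM. A
small sketch (the PID is a placeholder; use whatever "ps" shows for
your qemu-kvm process):

  # Sketch: print a QEMU process's resident set size every 5 seconds.
  import time

  QEMU_PID = 12345   # placeholder - the real qemu-kvm PID

  def rss_kib(pid):
      with open("/proc/%d/status" % pid) as f:
          for line in f:
              if line.startswith("VmRSS:"):
                  return int(line.split()[1])   # value is in KiB
      return 0

  while True:
      print("qemu RSS: %d KiB" % rss_kib(QEMU_PID))
      time.sleep(5)

The RSS will start well below the configured 32GB and grow as the guest
uses more memory; it will also shrink when the host pushes parts of the
guest out to swap.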

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



