Re: Memory leaks in virtio drivers?

On Fri, Nov 26, 2010 at 8:16 PM, Freddie Cash <fjwcash@xxxxxxxxx> wrote:
> On Fri, Nov 26, 2010 at 12:04 PM, Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote:
>> On Fri, Nov 26, 2010 at 7:19 PM, Freddie Cash <fjwcash@xxxxxxxxx> wrote:
>>> Within 2 weeks of booting, the host machine is using 2 GB of swap, and
>>> disk I/O wait is through the roof.  Restarting all of the VMs will
>>> free up RAM, but restarting the whole box is the only way to get
>>> performance back up.
>>>
>>> A guest configured to use 8 GB of RAM will have 9 GB virt and 7.5 GB
>>> res shown in top.  In fact, every single VM shows virt above the limit
>>> set for the VM.  Usually by close to 25%.
>>
>> Not sure about specific known issues with those Debian package versions, but...
>>
>> Virtual memory does not mean much.  For example, a 64-bit process can
>> map in 32 GB and never touch it.  The virt number will be >32 GB but
>> actually no RAM is being used.  Or it could be a memory-mapped file,
>> which is backed by disk and whose pages can be dropped if physical
>> memory runs low.  Looking at the virtual memory figure is not that
>> useful.
>>
>> Also remember that qemu-kvm itself requires memory to perform the
>> device emulation and virtualization.  If you have an 8 GB VM, plan
>> for more than 8 GB to be used.  Clearly this memory overhead should
>> be kept low.  Is your 25% virtual memory overhead figure from a small
>> VM?  Because 9 GB virtual / 8 GB VM is 12.5%, not 25%.
>>
>> What is the sum of all VMs' RAM?  I'm guessing you may have
>> overcommitted resources (e.g. 2 x 8 GB VM on a 16 GB machine).  If
>> you don't leave the host Linux system some resources, you will get
>> bad VM performance.
>
> Nope, not overcommitted.  Sum of RAM for all VMs (in MB):
> 512 + 768 + 1024 + 512 + 512 + 1024 + 1024 + 768 + 8192 = 14336
> Leaving about 2 GB for the host.

How do those VM RAM numbers stack up with ps -eo rss,args | grep kvm?

If the rss total reveals the qemu-kvm processes are using >15 GB of
RAM, then it might be worth giving them more breathing room.
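
To put a number on it, something like this should total them up (just
a sketch; the [k]vm pattern keeps awk from counting its own command
line, and ps reports rss in KB):

  ps -eo rss,args |
      awk '/[k]vm/ { kb += $1 } END { printf "%.1f GB\n", kb/1024/1024 }'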

> Doing further googling, could it be a caching issue in the host?  We
> currently have no cache= settings for any of our virtual disks.  I
> believe the default is still writethrough, so the host is trying to
> cache everything?

Yes, the default is writethrough.  cache=none would reduce buffered
file pages, so it's worth a shot.
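
On the qemu-kvm command line each virtio disk would then end up with
something like this (the disk path here is only an example):

  -drive file=/dev/vg0/guest0,if=virtio,cache=none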

> Anyone know how to force libvirt to use cache='none' in the <driver>
> block?  libvirt-bin 0.8.3 and virt-manager 0.8.4 ignore it if I edit
> the domain.xml file directly, and there's nowhere to set it in the
> virt-manager GUI.  (Only 1 of the VMs is managed via libvirt
> currently.)
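
For reference, the attribute is meant to go on the <driver> element
inside each <disk>, something like this (untested with your versions,
and the source path is only an example):

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source file='/var/lib/libvirt/images/guest0.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>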

A hack you can do, if your libvirt does not support the <driver
cache='none'/> attribute, is to move /usr/bin/qemu-kvm out of the way
and replace it with a shell script that does
s/if=virtio/if=virtio,cache=none/ on its arguments before invoking the
real /usr/bin/qemu-kvm.  (Perhaps the cleaner way is editing the
domain XML to use <emulator>/usr/bin/kvm_cache_none.sh</emulator>, but
I haven't tested it.)
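
For example, an untested sketch of such a wrapper, assuming the real
binary has been moved aside to /usr/bin/qemu-kvm.real:

  #!/bin/sh
  # Untested sketch: append cache=none to each if=virtio -drive
  # argument, then exec the real emulator (moved aside beforehand).
  n=$#
  while [ $n -gt 0 ]; do
      arg=$1; shift
      case "$arg" in
          *cache=*) ;;                          # cache mode already given
          *if=virtio*) arg="$arg,cache=none" ;; # append cache=none
      esac
      set -- "$@" "$arg"                        # rebuild the arg list
      n=$((n-1))
  done
  exec /usr/bin/qemu-kvm.real "$@"

After swapping it in, check ps output to confirm cache=none actually
shows up on the -drive options.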

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

