Re: how to tweak kernel to get the best out of kvm?

On 03/10/2010 02:58 PM, Harald Dunkel wrote:
Hi Avi,

On 03/08/10 12:02, Avi Kivity wrote:
On 03/05/2010 05:20 PM, Harald Dunkel wrote:

Hi folks,

Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up all block
device or file system performance, so that the kvm clients become almost
unresponsive. This is _very_ bad. I would like to make sure that the kvm
clients do not affect each other, and that all of them (including the server
itself) get a fair share of computing power and memory.

Please describe the issue in detail, and provide output from 'vmstat' and
'top'.

Sorry for the delay. I cannot put these services at risk, so I have set up
a test environment on another host (2 quad-core Xeons, HT enabled,
32 GByte RAM, no swap, bridged networking) to reproduce the problem.

There are 8 virtual hosts, each with a single CPU, 1 GByte RAM
and 4 GByte swap on a virtual disk. The virtual disks are image
files in the local file system. These images are not shared.

For testing, each virtual host builds the Linux kernel. In parallel I am
running rsync to clone a remote virtual machine (22 GByte) to the local
physical disk.

Attached you can find the requested logs. kern.log shows the problem:
the virtual CPUs appear to get stuck. Several virtual hosts showed this
effect; one vhost was unresponsive for more than 30 minutes.

Admittedly this is a stress test, but I saw a similar effect with our
virtual mail server on the production system while a similar rsync session
was running: mailhost was unresponsive for more than 2 minutes, then it
came back. The other 8 virtual hosts on that system were started, but
idle (AFAICT).


You have tons of iowait time, indicating an I/O bottleneck.
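For anyone following along, iowait shows up as the 'wa' column in vmstat;
a quick way to confirm the bottleneck (intervals below are only examples):

  # One-second samples; persistently high 'wa' (iowait) and 'b' (blocked
  # processes) point at an I/O bottleneck rather than a CPU one.
  vmstat 1 10
  # Per-device utilisation and wait times (iostat comes with sysstat):
  iostat -x 1 10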

What filesystem are you using for the host? Are you using qcow2 or raw access? What's the qemu command line?

Perhaps your filesystem doesn't perform well on synchronous writes. For testing only, you might try cache=writeback.
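As a rough illustration (not the actual command line from this thread; the
image path, memory size and virtio interface are made up), a cache=writeback
test run could look like:

  # Test invocation only -- writeback caching risks data loss if the host
  # crashes, so don't leave it enabled on production guests.
  qemu-system-x86_64 -enable-kvm -m 1024 -smp 1 \
      -drive file=/var/lib/kvm/guest1.img,if=virtio,cache=writeback \
      -net nic,model=virtio -net tap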

BTW, please note that free memory goes down over time. This happens only
while the rsync is running; without rsync the free memory is stable.

That's expected. rsync fills up both the guest and host page caches, and both drain free memory (the guest only until it has touched all of its memory).

What config options would you suggest to build and run a Linux
kernel optimized for running kvm clients?

Sorry for asking, but AFAICS some general guidelines for kvm are missing
here. Of course I saw a lot of options in
Documentation/kernel-parameters.txt, but unfortunately I am not a kernel
hacker.

Any helpful comment would be highly appreciated.

One way to ensure guests don't affect each other is not to overcommit,
that is, make sure each guest gets its own cores, there is enough memory
for all guests, and guests have separate disks.  Of course that defeats
some of the reasons for virtualizing in the first place; but if you
share resources, some compromises must be made.
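One way to give a guest its own cores is to pin its qemu process with
taskset; a minimal sketch (the PID and core number are made up, not taken
from this thread):

  # Pin an already-running guest (qemu PID 4242) to host core 3:
  taskset -pc 3 4242
  # Or start the guest pinned from the beginning:
  taskset -c 3 qemu-system-x86_64 -enable-kvm -m 4096 -smp 1 ...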

How many virtual machines would you assume I could run on a host with
64 GByte RAM, 2 quad-cores, a bonded NIC with 4 x 1 Gbit/s and a
hardware RAID? Each vhost is supposed to get 4 GByte RAM and 1 CPU.

15 guests should fit comfortably, more with ksm running if the workloads are similar, or if you use ballooning.
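For reference, ksm can be switched on and tuned through sysfs; the scan
values below are arbitrary starting points, not recommendations from this
thread:

  # Enable kernel samepage merging (the kernel needs CONFIG_KSM=y):
  echo 1 > /sys/kernel/mm/ksm/run
  # Scan rate: pages examined per wake-up, and sleep between wake-ups (ms):
  echo 100 > /sys/kernel/mm/ksm/pages_to_scan
  echo 200 > /sys/kernel/mm/ksm/sleep_millisecs
  # How many pages are currently being shared:
  cat /sys/kernel/mm/ksm/pages_sharing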

If you do share resources, then Linux manages how they are shared.  The
scheduler will share the processors, the memory management subsystem
will share memory, and the I/O scheduler will share disk bandwidth.  If
you see a problem in one of these areas you will need to tune the
subsystem that is misbehaving.

Do you think that the bridge connecting the tunnel devices and
the real NIC makes the problems? Is there also a subsystem managing
network access?

Here the problem is likely the host filesystem and/or I/O scheduler.
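If the I/O scheduler is the suspect, it can be inspected and switched at
runtime; 'sda' below is just an example device:

  # The elevator shown in brackets is the active one:
  cat /sys/block/sda/queue/scheduler
  # Try another one, e.g. deadline instead of cfq:
  echo deadline > /sys/block/sda/queue/scheduler
  # With cfq, individual qemu processes can also be deprioritized:
  ionice -c2 -n7 -p <qemu-pid>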

The optimal layout is placing guest disks in LVM volumes, and accessing them with -drive file=...,cache=none. However, file-based access should also work.
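A minimal sketch of that layout (the volume group name and size are made up):

  # One logical volume per guest, handed to qemu as a raw device,
  # bypassing the host page cache with cache=none:
  lvcreate -L 20G -n guest1 vg0
  qemu-system-x86_64 -enable-kvm -m 1024 -smp 1 \
      -drive file=/dev/vg0/guest1,if=virtio,cache=none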

--
error compiling committee.c: too many arguments to function

