slow guest performance with build load, looking for ideas

We have been trying to test qemu-kvm virtual machines under an IO load.
The IO load is quite simple: A timed build of the linux kernel and modules.
I have found that virtual machines take more than twice as long to do this
build as the host.  It doesn't seem to matter whether I use virtio or not:
using the same device and the same filesystem, the host is more than twice as
fast.

We're hoping we can get some advice on how to address this issue.  If there
are any options we should add to our testing, we'd appreciate it.  I'm
also game to try development bits to see if they make a difference.  If it
turns out "that is just the way it is right now", we'd like to know that
too.

For these tests, I used Fedora 11 as the virtualization server because it has
recent bits.  I experimented with both SLES 11 and Fedora 11 guests.

In general, I used virt-manager to do the setup and launching.  So the
qemu-kvm command lines are based on that (and this explains why they are
a bit long).  I then modified the qemu-kvm command line to perform other
variations of the test.  Example command lines can be found at the end of
this message.

I performed the tests on two different systems to be sure the slowdown isn't
tied to specific hardware.

------------------
------------------
kernel/sw versions
------------------
------------------
virt host (always fedora 11): 2.6.29.4-167.fc11.x86_64
guest (same as above for fedora 11 guests, SLES 11 GA kernel for SLES guests)
qemu-kvm: qemu-kvm-0.10.4-4.fc11.x86_64
libvirt: libvirt-0.6.2-11.fc11.x86_64

----------------
----------------
Test description
----------------
----------------
The test I ran was the same in every scenario: a timed build of the linux
kernel and modules.
I decided on this test because we tend to make build servers out of new
hardware and software releases to help put them through their paces.

In all cases, the work area was on a device separate from the root.  The
roots were disk images, but the work area was always a whole disk device fed
to qemu-kvm and fully imported into the guest.  The one exception was a
couple of test runs using nfs from the host, mounted on the guest.
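
For reference, the nfs-from-host runs were set up roughly like the sketch
below; the export path, network, and addresses are assumptions, not the exact
ones we used:

 # on the host: export the work area
 host$ echo '/work 192.168.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
 host$ exportfs -ra

 # in the guest: mount the export (192.168.0.1 assumed to be the host)
 guest$ mount -t nfs 192.168.0.1:/work /work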

The test build filesystem was always ext3 (except for the case of
nfs-from-host, where it was ext3 on the host and nfs on the guest).  The
filesystem was simply mounted by hand with the mount command and no special
options.
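
Concretely, the work-area mount in the guest was nothing more than the
following (a sketch: the device name is an assumption, /dev/vdb-style for
virtio and /dev/sdb-style for IDE emulation, and /work stands in for the
actual mount point):

 $ mount -t ext3 /dev/vdb /work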

The run would look something like this... Setup:
 $ cd /work/erikj/linux-2.6.29.4
 $ cp arch/x86/configs/x86_64_defconfig .config
 $ make oldconfig
 $ make -j12  [ but not counted in the test results ]

The part of the test repeated for each run:
 $ make -j12 clean
 $ time (make -j12 && make -j12 modules)   # represents posted results

The output of that timing is what is pasted in the results below.

------------------
------------------
Testing on host 1:
------------------
------------------
Host distro: Fedora 11
Guest distro: Fedora 11 and SLES11
8 vcpus provided to guest, 2048 megabytes of memory

Virtualization host system information:
System type: SGI Altix XE 310, Supermicro X7DBT mainboard
Memory: 4 GB DDR2, 667 MHz
CPUs: 8 cores, Xeon 2.33 GHz, 4096 KB cache
disk 1 (root, 50 GB partition): HDS725050KLA360 (500 GB, 7200 RPM, SATA, 8.5 ms seek)
disk 2 (work area): HDT722525DLA380 (250 GB, 7200 RPM, SATA, 8.5 ms seek)

fedora11 host, no guest (baseline)
-----------------------
  -> real  10m38.116s  user  43m25.553s  sys   11m29.004s

fedora11 host, sles11 guest
---------------------------
 virtio, work area imported as a full device (not nfs)
  -> real  26m2.004s  user  99m29.177s  sys   30m31.586s

 virtio for root, but work area nfs-mounted from the host
  -> real  68m37.306s  user  76m0.445s  sys   67m17.888s

fedora11 host, fedora11 guest
-----------------------------
 IDE emulation, no virtio, work area device fully imported to the guest
  -> real  29m47.249s  user  59m1.583s  sys   41m34.281s

 Same as above, but with qemu cache=none parameter
  -> real  26m1.668s  user  66m14.812s  sys   46m21.366s

 virtio devices, device fully imported to guest for workarea, cache=none
  -> real  23m28.397s  user  68m27.730s  sys   47m50.256s

 Didn't do NFS testing in this scenario.


------------------
------------------
Testing on host 2:
------------------
------------------
Host distro: Fedora 11
Guest distro: Fedora 11
8 vcpus provided to guest, 4096 megabytes of memory

System type: SGI Altix XE 250, Supermicro X7DWN+ mainboard
Memory: 8 x 1 GB DDR2 667 MHz DIMMs
CPUs: 8 cores, Intel Xeon X5460, 3.16 GHz, 6144 KB cache
disk 1: LSI MegaRAID volume, 292 GB (root slice used is only 25 GB)
disk 2: LSI MegaRAID volume, 100 GB, full space used for the build work area

fedora11 host, no guest (baseline)
-----------------------
 -> real  6m25.008s   user  30m54.697s   sys   8m17.359s

fedora11 host, fedora11 guest
-----------------------------
  virtio, no cache= parameter supplied to qemu:
  -> real  19m46.770s   user  52m33.523s   sys   42m55.202s

  virtio guest, qemu cache=none parameter supplied:
  -> real  18m17.690s   user  51m3.223s   sys   41m22.047s

  IDE emulation, no cache parameter:
  -> real  22m41.472s   user  44m48.190s   sys   38m3.750s

  IDE emulation, qemu cache=none parameter supplied:
  -> real  19m53.111s   user  48m48.342s   sys  40m19.469s

---------------------------------------------
---------------------------------------------
Example qemu-kvm command lines for the tests:
---------------------------------------------
---------------------------------------------
virtio, no cache= parameter supplied to qemu:
Note: This is also exactly the command that libvirt ran.

/usr/bin/qemu-kvm -S -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=virtio,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,fd=20,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb \
  -usbdevice tablet -vnc 127.0.0.1:0 -soundhw es1370


virtio guest, qemu cache=none parameter supplied:
Note: The command was modified so that running qemu by hand worked, which
included setting up a tap interface so the network bridge works correctly
outside of libvirt (see the sketch below).  The same applies to the command
lines that follow.
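
For reference, the by-hand tap setup was roughly the following (a sketch
assuming a bridge named br0 already exists on the host; tunctl comes from the
tunctl package):

 # create a persistent tap device and attach it to the existing bridge
 $ tunctl -t tap0
 $ brctl addif br0 tap0
 $ ifconfig tap0 0.0.0.0 up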

/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=virtio,cache=none,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370

IDE emulation, no cache parameter:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=ide,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370

IDE emulation, qemu cache=none parameter supplied:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test \
  -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty \
  -pidfile /var/run/libvirt/qemu//f11-test.pid -boot c \
  -drive file=,if=ide,media=cdrom,index=2 \
  -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on \
  -drive file=/dev/sdb,if=ide,cache=none,index=1 \
  -net nic,macaddr=54:52:00:46:48:0e,vlan=0,model=virtio \
  -net tap,script=no,vlan=0,ifname=tap0 -serial pty -parallel none -usb \
  -usbdevice tablet -soundhw es1370