virtio-blk performance regression and qemu-kvm

Hi,

Recently I observed a performance regression with virtio-blk:
the I/O bandwidth differs noticeably between qemu-kvm 0.14.1 and 1.0.
I'd like to share my benchmark results and ask what the reason
might be.

1. Test condition

 - On the host, a ramdisk-backed block device (/dev/ram0)
 - qemu-kvm is configured with a virtio-blk drive backed by /dev/ram0,
   which shows up as /dev/vdb inside the guest VM (a sketch of the
   setup follows this list)
 - Host System: Ubuntu 11.10 / Kernel 3.2
 - Guest System: Debian 6.0 / Kernel 3.0.6
 - Host I/O scheduler: deadline
 - Testing tool: fio
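
 For reference, the setup looks roughly like this; the ramdisk size,
 guest memory/CPU values, root image name, and cache mode below are
 placeholders and assumptions, not necessarily my exact settings:

  # modprobe brd rd_nr=1 rd_size=1048576  # 1 GiB ramdisk -> /dev/ram0
  # qemu-kvm -m 1024 -smp 2 \
   -drive file=debian.img,if=virtio,cache=none \
   -drive file=/dev/ram0,if=virtio,cache=none,format=raw

 The second virtio drive is the one that shows up as /dev/vdb in the
 guest.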

2. Raw performance on the host

 Testing I/O with fio on /dev/ram0 directly on the host:

 - Sequential read (on the host)
  # fio --name=iops --rw=read --size=1G --iodepth=1 \
   --filename=/dev/ram0 --ioengine=libaio --direct=1 --bs=4096

 - Sequential write (on the host)
  # fio --name=iops --rw=write --size=1G --iodepth=1 \
   --filename=/dev/ram0 --ioengine=libaio --direct=1 --bs=4096

 Result:

  read   1691.6 MByte/s
  write   898.9 MByte/s

 As expected, this is extremely fast.

3. Comparison with different qemu-kvm versions

 Next, I ran the same benchmarks inside the guest with both
 qemu-kvm 0.14.1 and 1.0.

 - Sequential read (Running inside guest)
   # fio --name=iops --rw=read --size=1G --iodepth=1 \
    --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096

 - Sequential write (Running inside guest)
   # fio --name=iops --rw=write --size=1G --iodepth=1 \
    --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096

 I ran each test three times and averaged the results; the runs can
 be repeated with a loop like the one below.
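
 A loop along these lines repeats a run and filters out the bandwidth
 summary lines; the three numbers can then be averaged by hand (this
 is a sketch, not my literal command history):

   # for i in 1 2 3; do
      fio --name=iops --rw=read --size=1G --iodepth=1 \
       --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096
     done | grep -E '(READ|WRITE):'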

 Result:

  seqread with qemu-kvm 0.14.1   67.0 MByte/s
  seqread with qemu-kvm 1.0      30.9 MByte/s

  seqwrite with qemu-kvm 0.14.1  65.8 MByte/s
  seqwrite with qemu-kvm 1.0     30.5 MByte/s

 So the newest stable version of qemu-kvm achieves only about half
 the bandwidth of the older version 0.14.1.

The question is: why is it so much slower?
And how can we improve the performance, other than by downgrading
to 0.14.1?
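
One knob that might be worth experimenting with is the ioeventfd
property of virtio-blk-pci. Whether its default differs between
0.14.1 and 1.0 is only a guess on my part, so treat this as a
diagnostic sketch rather than a known fix (root drive omitted for
brevity; "ramdrive" is just an id I picked):

 # qemu-kvm -m 1024 \
  -drive file=/dev/ram0,if=none,id=ramdrive,cache=none,format=raw \
  -device virtio-blk-pci,drive=ramdrive,ioeventfd=off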

I know there have already been several discussions of this issue,
for example the benchmarking and tracing work on virtio-blk
latency [1], and the in-kernel accelerator "vhost-blk" [2].
I'm going to continue testing with those as well.
But does anyone have a better idea, or know about recent updates?

Regards,
Dongsu

[1] http://www.linux-kvm.org/page/Virtio/Block/Latency
[2] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/76893
