Re: virtio disk slower than IDE?

john cooper wrote:

The test is building the Linux kernel (timing only the second run, so the test gets the benefit of the local cache):

make clean; make -j8 all; make clean; sync; time make -j8 all

This takes about 10 minutes with IDE disk emulation and about 13 minutes with virtio. I ran the tests multiple times with most non-essential services on the host switched off (including cron/atd) and the guest in single-user mode to reduce the "noise" in the test to a minimum, and the results are pretty consistent, with virtio about 30% behind.

I'd expect that for an observed 30% wall clock difference
in an operation as complex as a kernel build, the underlying
i/o throughput disparity is substantially greater.  Did you
try a simpler, more regular load, e.g. a streaming dd read
of various block sizes from the guest's raw disk devices?
That is also considerably easier to debug than the complex
i/o load generated by a build.
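Something along these lines inside the guest would give a baseline (only a sketch; /dev/vda is the usual virtio disk, /dev/hda or /dev/sda the IDE case, and iflag=direct keeps the guest page cache out of the picture):

for bs in 4k 64k 1M; do
    # scale count so each run lasts a few seconds
    dd if=/dev/vda of=/dev/null bs=$bs count=4000 iflag=direct
done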

I'm not convinced it's the read performance, since it's the second pass that is timed, by which time all the source files will be in the guest's cache. I verified this by doing just one pass and priming the cache with:

find . -type f -exec cat '{}' > /dev/null \;

The execution times are indistinguishable from the second pass in the two-pass test.

To me that would indicate that the problem is with write performance, rather than read performance.
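A quick sanity check on the write path alone, purely as a sketch (the path and sizes are placeholders; oflag=direct bypasses the guest page cache):

dd if=/dev/zero of=/tmp/ddwrite.tmp bs=64k count=16384 oflag=direct   # ~1GB streaming write
rm /tmp/ddwrite.tmp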

One way to chop up the problem space is using blktrace
on the host to observe both the i/o patterns coming out
of qemu and the host's response to them in terms of
turnaround time.  I expect you'll see requests of a
somewhat different nature generated by qemu w.r.t.
blocking and the number of threads serving virtio_blk
requests relative to IDE, but the host response should
be essentially the same in terms of data returned per
unit time.
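Roughly, on the host, assuming the guest's disk image or LV sits on /dev/sdb (the device name is only a placeholder):

blktrace -d /dev/sdb -o trace -w 60     # capture 60s while the build runs
blkparse -i trace -d trace.bin | less   # formatted per-request event stream
btt -i trace.bin                        # queue/service time summary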

If the host looks to be turning around i/o requests with
similar latency in both cases, the problem would be a lower
frequency of requests generated by qemu in the virtio_blk
case.  Here it would be useful to know the host load
generated by the guest in both cases.
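For example, something like this on the host while the build runs (pidstat comes with sysstat; the qemu process name below is just an assumption for this setup):

pidstat -u -d -p $(pidof qemu-system-x86_64) 1   # per-second CPU and I/O of the qemu process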

With virtio the CPU usage did seem noticeably lower. I figured that was because it was spending more time waiting for I/O to complete, since it was clearly bottlenecking on disk I/O (that being the only thing that changed).

I'll try iozone's write tests and see how that compares. If I'm right about write performance being the problem, iozone should show the same deterioration on the write tests relative to IDE emulation.
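Something along these lines, purely as a sketch (file size, record size and path are placeholders; -I requests O_DIRECT so the guest page cache doesn't mask the result):

iozone -i 0 -i 1 -s 2g -r 64k -I -f /tmp/iozone.tmp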

Gordan
