René Pfeiffer wrote:
Hello! I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I noticed that the I/O performance of an unattended stock Debian Lenny install dropped noticeably. The test machines ran with kvm-88 and 2.6.30.x before. The difference is very noticeable (the install went from about 5 minutes up to 15-25 minutes). The two test machines have different CPUs (one is an Intel Core2 CPU, the other runs an AMD Athlon 64 X2). Is this the effect of code added for caching/data integrity in the VirtIO block layer, or something else? qemu-system-x86_64 seems to hang a lot more under heavy I/O (showing 'D' state in top/htop). The command line is quite straightforward: qemu-system-x86_64 -drive file=debian.qcow2,if=virtio,boot=on -cdrom /srv/isos/debian-502-i386-netinst.iso -smp 2 -boot d -m 512 -net nic -net user -usb
^^^^^^^^^ Care to try with something more real than user-level networking? You're using a netinstall which - apparently - uses the network to download components etc., and user-level networking is known to be very, very slow. Also try the same with raw images. I, for one, do not see any noticeable speed difference with tap networking (virtio, e1000 or rtl8139) and with raw disks (either virtio or ide), on either Linux or Windows guests (Windows without virtio so far). But granted, I didn't try user-level networking, and I don't use qcow too often (however, a colleague of mine who uses qcow didn't complain about speed either). /mjt
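To illustrate the suggestion above, here is a minimal sketch of how one might rerun the same install with tap networking and a raw image instead of user-level networking and qcow2. The bridge name (br0), tap device (tap0), and file names are assumptions for illustration, and the tap setup assumes a pre-configured host bridge:

```shell
# Convert the qcow2 image to raw for comparison
# (file names are hypothetical; adjust to your setup)
qemu-img convert -O raw debian.qcow2 debian.raw

# Create a tap device and attach it to an existing bridge br0
# (assumes the bridge is already set up on the host)
ip tuntap add dev tap0 mode tap
ip link set tap0 master br0
ip link set tap0 up

# Same install, but with a raw disk and tap networking;
# script=no,downscript=no skips the default ifup/ifdown scripts
qemu-system-x86_64 \
    -drive file=debian.raw,if=virtio,boot=on \
    -cdrom /srv/isos/debian-502-i386-netinst.iso \
    -smp 2 -boot d -m 512 \
    -net nic,model=virtio \
    -net tap,ifname=tap0,script=no,downscript=no \
    -usb
```

Comparing this run against the original qcow2 + user-mode setup should help separate network overhead from block-layer changes.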
The installation was repeated multiple times; each time the test machines had no other load. The effect is the same with a Windows XP guest running without VirtIO.