On 27/04/09 at 19:40 -0400, john cooper wrote:
> Lucas Nussbaum wrote:
>> On 27/04/09 at 13:36 -0400, john cooper wrote:
>>> Lucas Nussbaum wrote:
>>
>> non-virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> "root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1"
>>
>> virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> "root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1"
>>
> One suggestion would be to use a separate drive for the virtio
> vs. non-virtio comparison to avoid a Heisenberg effect.

I don't have another drive available, but I tried to output the trace
over the network instead (invocation sketched in the P.S. below). The
results were the same.

>> So, apparently, with virtio, there's a lot more data being written to
>> disk. The underlying filesystem is ext3, and is mounted as /tmp. It
>> only contains the VM image file. Another difference is that, with
>> virtio, the I/O load was shared equally across all 4 CPUs, while
>> without virtio, CPU0 and CPU1 did all the work.
>> In the virtio log, I also see a (null) process doing a lot of writes.
>
> Can't say what is causing that -- only took a look at the short logs.
> However the isolation suggested above may help factor that out if you
> need to pursue this path.
>
>> I uploaded the logs to http://blop.info/bazaar/virtio/, if you want
>> to take a look.
>
> In the virtio case i/o is being issued from multiple threads. You
> could be hitting the cfq close-cooperator bug we saw as late as
> 2.6.28.
>
> A quick test to rule this in/out would be to change the block
> scheduler to other than cfq for the host device where the backing
> image resides -- in your case the host device containing
> /tmp/debian-amd64.img.
>
> Eg for /dev/sda1:
>
>   # cat /sys/block/sda/queue/scheduler
>   noop anticipatory deadline [cfq]
>   # echo deadline > /sys/block/sda/queue/scheduler
>   # cat /sys/block/sda/queue/scheduler
>   noop anticipatory [deadline] cfq

I tried that (also with noop and anticipatory), but it didn't result in
any improvement.

I then upgraded to kvm-85 (both the host kernel modules and the
userspace) and re-ran the tests. Performance is better (about 85 MB/s),
but still very far from the non-virtio case.

Any other suggestions?

-- 
| Lucas Nussbaum
| lucas@xxxxxxxxxxxxxxxxxx   http://www.lucas-nussbaum.net/ |
| jabber: lucas@xxxxxxxxxxx         GPG: 1024D/023B3F4F     |
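
P.S. For anyone wanting to reproduce the network trace: blktrace can
ship its data to another machine instead of writing it to the disk
being traced, which is one way to avoid the Heisenberg effect without a
second drive. The device name and receiver address below are examples;
check blktrace(8) on your version for the exact options.

On the receiving machine, start blktrace in server (listen) mode:

  # blktrace -l

On the host under test, trace the disk backing /tmp and send the events
to the receiver instead of storing them locally:

  # blktrace -d /dev/sda -h <receiver-ip>

The server stores the per-CPU trace files in a directory named after
the sending host; run blkparse there to get a readable trace:

  # blkparse -i sda

And one simple way to measure sequential write throughput inside the
guest (not necessarily the benchmark behind the 85 MB/s figure above;
the output path is illustrative):

  # dd if=/dev/zero of=/root/ddtest bs=1M count=1024 conv=fdatasync

conv=fdatasync makes dd flush the data before it reports the elapsed
time, so the figure reflects what actually reached the (virtual) disk
rather than the page cache.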