KVM - IO disk performance - strange behaviour - better performance in guest

Hello,

I'm currently testing some distributions with an up-to-date KVM:
- Ubuntu Karmic 9.10
- RHEL 5.4

I'm using IOzone to test disk I/O performance.
For my tests I have dedicated an LVM partition to the IOzone benchmark:
iozone -a -U /mnt/bench/ -f /mnt/bench/test-file -R -b <file.xls>
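
To take the guest and host page caches out of the comparison, I also plan a
direct-I/O run. A minimal sketch using IOzone's -I flag (O_DIRECT), assuming
the build supports it:

iozone -a -I -U /mnt/bench/ -f /mnt/bench/test-file -R -b <file.xls>

With -I the test file is opened with O_DIRECT, so reads and writes bypass the
page cache and should better reflect the real device speed.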

I have done multiple tests with cache=none, cache=writeback and
cache=writethrough.
Inside the VM I have tried the filesystem options data=ordered and
data=writeback.
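
In the guest, that means mounting the benchmark filesystem along these lines
(assuming the second virtio drive shows up as /dev/vdb, as it does here):

mount -t ext3 -o data=writeback /dev/vdb /mnt/bench
# or, with the default journalling mode:
mount -t ext3 -o data=ordered /dev/vdb /mnt/bench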

All my tests give different results, but one thing stands out: in a VM, disk
I/O performance is better than on the host for small files and block sizes,
and worse for large files and block sizes.

I can provide all my IOzone benchmark results if you'd like ;).

Well, here are my assumptions:
- KVM provides a cache for read/write operations.
- KVM tells the guest that data has been written when, in fact, it has not
(see the check below).
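
If the second point is true, flushing the host caches between runs should make
the difference visible. A minimal check on the host, using the standard procfs
interface (needs root):

sync                                # write out dirty pages first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes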

Can someone explain this behaviour to me? Can I control it?
Can this lead to data corruption in the event of a hardware crash?
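
On the corruption question, here is what I plan to try in the guest: ext3's
write-barrier option, which is off by default on these kernels. Again assuming
the benchmark disk is /dev/vdb:

mount -t ext3 -o data=ordered,barrier=1 /dev/vdb /mnt/bench
# barriers make journal commits ask the (virtual) device to flush;
# this only helps if qemu passes the flush through to the host disk,
# which depends on the qemu version and the cache= mode in use.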


More information
==============================
Ubuntu 9.10
-----------------------------------
- host: Ubuntu 9.10; kernel 2.6.31; qemu-kvm 0.11; dedicated LVM partition
with ext3 (data=ordered or writeback).

- guest: Ubuntu 9.10; kernel 2.6.31; qemu-kvm 0.11; virtio disk; dedicated
LVM partition with ext3 (data=ordered or writeback).

- The launch line:
/usr/bin/kvm -M pc-0.11 -m 512 -smp 1 -name kvm-ubuntu910 \
    -uuid e5a362c5-c28a-93dd-043b-d46eb4daba37 \
    -monitor unix:/var/run/libvirt/qemu/kvm-ubuntu910.monitor,server,nowait \
    -boot c \
    -drive file=/dev/storage-local-vol2-lvm/kvm-ubuntu910,if=virtio,index=0,cache=<cache_option>,boot=on \
    -drive file=/dev/storage-local-vol2-lvm/bench,if=virtio,cache=<cache_option>,index=1 \
    -k fr
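
For what it's worth, my current understanding of what the cache options do on
the host side; I have not verified this against the qemu-kvm 0.11 source, so
treat the details as assumptions:

# cache=none         -> host opens the volume with O_DIRECT; no host
#                       page cache, only the guest's own caching.
# cache=writethrough -> host page cache is used, but writes are synced
#                       (O_DSYNC) before completion is reported.
# cache=writeback    -> writes land in the host page cache and are
#                       acknowledged before they reach the disk.
# Example: second drive with host-side caching fully disabled:
-drive file=/dev/storage-local-vol2-lvm/bench,if=virtio,index=1,cache=none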

I have done tests with cache=none, cache=writeback and cache=writethrough.
Every time the results were different, but the pattern was the same: in the
VM, disk I/O performance is better for small files and block sizes and worse
for large files and block sizes.


RHEL 5.4
-----------------------------------
Similar tests... similar behaviour...
- host: RHEL 5.4; ext3; KVM from Red Hat; dedicated LVM partitions; disk
driver from Red Hat.
- guest: RHEL 5.4; ext3; KVM from Red Hat; dedicated LVM partitions; disk
driver from Red Hat.






