On Tue, Sep 6, 2011 at 11:25 PM, TooMeeK <toomeek_85@xxxxx> wrote:
> First, I created mirrored storage in hypervisor from one 600-gig partition
> (yes, that's correct - I have only one drive currently), details:
> sudo mdadm --detail /dev/md3
> /dev/md3:
>         Version : 1.2
>   Creation Time : Thu Jul 28 20:07:00 2011
>      Raid Level : raid1
>      Array Size : 664187352 (633.42 GiB 680.13 GB)
>   Used Dev Size : 664187352 (633.42 GiB 680.13 GB)
>    Raid Devices : 2
>   Total Devices : 1
>     Persistence : Superblock is persistent
>
>     Update Time : Thu Jul 28 22:07:10 2011
>           State : clean, degraded
>  Active Devices : 1
> Working Devices : 1
>  Failed Devices : 0
>   Spare Devices : 0
>
>            Name : Server:3  (local to host Server)
>            UUID : 87184170:2d9102b1:ca16a5d7:1f23fe2e
>          Events : 3276
>
>     Number   Major   Minor   RaidDevice State
>        0       8       23        0      active sync   /dev/sdb7
>        1       0       0         1      removed
>
> Partition type is Linux RAID autodetect and this drive can do 80MB/s write
> and 100 MB/s read seq.

How did you measure those figures?

To double-check sequential read throughput on the host:

# dd if=/dev/md3 of=/dev/null bs=64k count=16384 iflag=direct

The SMB results don't help narrow down a disk I/O problem.

To collect comparable sequential read throughput inside the guest:

# dd if=/dev/vda of=/dev/null bs=64k count=16384 iflag=direct

> QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)

Try qemu-kvm 0.15.

> Next, I've tried following combinations with virt-manager 0.8.4 (from
> XML of VM):
> 1. on Debian VM with virtio drivers for both storage and NIC:
> <disk type='block' device='disk'> cache='none'
>   <source dev='/dev/md3'/>
>   <target dev='vdb' bus='virtio'/>

You can enable Linux AIO, which typically performs better than the
default io="threads":

<driver name="qemu" type="raw" io="native"/>

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
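
[Editor's note: for readers unfamiliar with libvirt, here is a sketch of
how the suggested <driver> line fits into the full <disk> stanza. It
reuses the /dev/md3 source and vdb virtio target from the quoted config;
the cache='none' attribute is assumed from the poster's mention of it.]

```xml
<disk type='block' device='disk'>
  <!-- io='native' selects Linux AIO instead of the default thread pool;
       cache='none' (O_DIRECT) is generally required for it to pay off -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/md3'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

After editing the domain XML (e.g. via virsh edit), the guest must be
shut down and restarted for the new driver attributes to take effect.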