On Thu, Mar 8, 2012 at 11:56 PM, Ross Becker <ross.becker@xxxxxxxxx> wrote:
> I just joined in order to chime in here-
>
> I'm seeing the exact same thing as Reeted; I've got a machine with a
> storage subsystem capable of 400k IOPS, and when I punch the storage up to
> VMs, each VM seems to top out at around 15-20k IOPS. I've managed to get
> to 115k IOPS by creating 8 VMs, doing appropriate CPU pinning to spread
> them amongst physical cores, and running IO in them simultaneously, but
> I'm unable to get a single VM past 20k IOPS.
>
> I'm using qemu-kvm 0.12.1.2, as distributed in RHEL 6.2.
>
> The hardware is a Dell R910 chassis with 4 Intel E7 processors. I am
> poking LVM logical volume block devices directly up to VMs as disks,
> format raw, virtio driver, write caching none, IO mode native. Each VM
> has 4 vCPUs.
>
> I'm also using fio to do my testing.
>
> The interesting thing is that throughput is actually pretty fantastic; I'm
> able to push 6.3 GB/sec using 256k blocks, but the IOPS at 4k block size
> are poor.

There is a stalled effort to improve the virtio-blk guest driver IOPS
performance. You might be interested in testing these patches
("virtio-blk: Change I/O path from request to BIO"):

https://lkml.org/lkml/2011/12/20/419

No one has explored and benchmarked them deeply enough to make it clear
that these patches are the way forward. What the patches do is change the
guest driver to reduce lock contention and bypass the guest I/O scheduler
in favor of a more lightweight code path in the guest kernel. This should
be a good fit for your 400k IOPS, 4-vCPU setup.

Stefan
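For reference, the configuration Ross describes (a raw LVM logical volume handed to the guest over virtio with write caching off and native AIO) corresponds roughly to a qemu-kvm invocation like the sketch below. The volume path, guest name, and memory size are placeholders, and on RHEL 6.2 these settings would normally be expressed through libvirt rather than typed by hand.

    # Minimal sketch of the setup described above (paths and sizes are made up):
    # a raw LVM logical volume exposed to the guest as a virtio disk with the
    # host page cache disabled (cache=none) and Linux AIO (aio=native).
    /usr/libexec/qemu-kvm \
        -name iops-test \
        -m 4096 \
        -smp 4 \
        -drive file=/dev/vg_data/lv_test,if=virtio,format=raw,cache=none,aio=native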
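The 4k random I/O numbers quoted above could be gathered with an fio run along these lines inside the guest; the device name, queue depth, and job count here are illustrative, not the values Ross used.

    # Hypothetical fio invocation for 4k random reads against the virtio disk
    # inside the guest (/dev/vdb is an assumed device name). direct=1 bypasses
    # the guest page cache so the result reflects the virtio-blk path.
    fio --name=randread-4k \
        --filename=/dev/vdb \
        --ioengine=libaio \
        --direct=1 \
        --rw=randread \
        --bs=4k \
        --iodepth=32 \
        --numjobs=4 \
        --runtime=60 \
        --time_based \
        --group_reporting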
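The patches Stefan points to bypass the guest block layer's request queue inside the driver itself; switching the guest elevator to noop is not the same thing, but it is a quick, reversible way to get a rough feel for what the guest I/O scheduler costs on this workload. /dev/vda is an assumed virtio disk name.

    # Inside the guest: show the current I/O scheduler for the virtio disk
    # and switch it to noop for a comparison run (assumed device name).
    cat /sys/block/vda/queue/scheduler
    echo noop > /sys/block/vda/queue/scheduler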