On Fri, Jun 08, 2012 at 09:30:19PM +0100, Brian Candler wrote:
> ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros2 bs=1024k count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 14.5182 s, 7.2 MB/s
>
> And this is after live-migrating the VM to dev-storage2:
>
> ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros3 bs=1024k count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 4.17285 s, 25.1 MB/s

I did some more timings after converting the qcow2 image to a raw file.

Note that you have to be careful here: qemu-img convert -O raw gives you a
sparse file, i.e. it doesn't actually allocate space on disk. So I had to
flatten it with dd (which, incidentally, showed a reasonable write
throughput of ~350MB/sec to the 12-disk RAID10 array, and was the same
whether writing locally or writing to a single-brick gluster volume).

Tests:

1. VM using a single-brick gluster volume as the backend. The brick is on
the same node that KVM is running on. (Actually, the second cluster node
was powered off for all these tests.)

ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros4 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 55.9581 s, 9.4 MB/s

(Strangely, this is lower than the 25MB/s I got before.)

2. VM image stored directly on the RAID10 array - no gluster.

ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros4 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 10.6027 s, 49.4 MB/s

3. The same VM instance as in test 2, but this time with the option
cache='none' (which doesn't work with glusterfs).

ubuntu@lucidtest:~$ dd if=/dev/zero of=/var/tmp/test.zeros5 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 2.29959 s, 228 MB/s

That's more like it :-)

Clearly cache='none' (O_DIRECT) makes a big difference when using a local
filesystem, so I'd very much like to be able to test it with gluster.

I'm also very much looking forward to having libglusterfs integrated
directly into KVM, which I believe is on the cards at some point:
http://www.mail-archive.com/users at ovirt.org/msg01812.html

Regards,

Brian.

P.S. For those who haven't seen it yet, there's a very nice Red Hat
presentation on KVM performance tuning here:
http://www.linux-kvm.org/wiki/images/5/59/Kvm-forum-2011-performance-improvements-optimizations-D.pdf
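P.P.S. For anyone repeating the qcow2-to-raw conversion, the steps were
roughly along these lines (file names here are just illustrative). The dd
copy is what forces every block to actually be written out, since the raw
output of qemu-img convert is sparse:

qemu-img convert -O raw lucidtest.qcow2 lucidtest-sparse.raw
# flatten the sparse file so all blocks are really allocated on disk
dd if=lucidtest-sparse.raw of=lucidtest.raw bs=1024k
rm lucidtest-sparse.raw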
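P.P.P.S. To clarify the cache='none' bit in test 3: that's the cache
attribute on the disk <driver> element in the libvirt domain XML, and it
makes qemu open the image with O_DIRECT. On a bare kvm command line the
equivalent would be something like this (the image path is illustrative):

kvm -drive file=/var/lib/libvirt/images/lucidtest.raw,if=virtio,cache=none ...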