Re: [PATCH 0/9] kvm tools, qcow: Improve QCOW performance

On Sun, Jul 10, 2011 at 9:17 PM, Ingo Molnar <mingo@xxxxxxx> wrote:
>
> * Pekka Enberg <penberg@xxxxxxxxxx> wrote:
>
>> Hi Ingo,
>>
>> * Pekka Enberg <penberg@xxxxxxxxxx> wrote:
>> >> This series fixes QCOW locking issues and implements delayed metadata writeout.
>> >> This improves performance of writeout to QCOW2 images that don't have clusters
>> >> and L2 tables allocated on-disk.
>> >>
>> >> I tested the series by running
>> >>
>> >>   mount -t ext4 /dev/vdb /mnt
>> >>   dd if=/dev/zero of=/mnt/tmp
>> >>
>> >> in the guest multiple times against a freshly generated QCOW2 image:
>> >>
>> >>   dd if=/dev/zero of=fs.ext4 bs=1024k count=512 && mkfs.ext4 -F fs.ext4 && qemu-img convert -O qcow2 fs.ext4 fs.qcow2
>> >>
>> >> which causes worst-case behavior for the current code.
>> >>
>> >> Before:
>> >>
>> >>   [ seekwatcher: http://userweb.kernel.org/~penberg/kvm-qcow-delayed/kvm-qcow2-master.png ]
>> >>
>> >>   511229952 bytes (511 MB) copied, 19.906 s, 25.7 MB/s
>> >>   511229952 bytes (511 MB) copied, 20.3168 s, 25.2 MB/s
>> >>   511229952 bytes (511 MB) copied, 20.8078 s, 24.6 MB/s
>> >>   511229952 bytes (511 MB) copied, 21.0889 s, 24.2 MB/s
>> >>   511229952 bytes (511 MB) copied, 20.7833 s, 24.6 MB/s
>> >>   511229952 bytes (511 MB) copied, 20.7536 s, 24.6 MB/s
>> >>   511229952 bytes (511 MB) copied, 20.0312 s, 25.5 MB/s
>> >>
>> >> After:
>> >>
>> >>   [ seekwatcher: http://userweb.kernel.org/~penberg/kvm-qcow-delayed/kvm-qcow2-delayed.png ]
>> >>
>> >>   511229952 bytes (511 MB) copied, 7.68312 s, 66.5 MB/s
>> >>   511229952 bytes (511 MB) copied, 7.54065 s, 67.8 MB/s
>> >>   511229952 bytes (511 MB) copied, 9.34749 s, 54.7 MB/s
>> >>   511229952 bytes (511 MB) copied, 9.2421 s, 55.3 MB/s
>> >>   511229952 bytes (511 MB) copied, 9.9364 s, 51.5 MB/s
>> >>   511229952 bytes (511 MB) copied, 10.0337 s, 51.0 MB/s
>> >>   511229952 bytes (511 MB) copied, 9.39502 s, 54.4 MB/s
>>
>> On Sun, Jul 10, 2011 at 8:15 PM, Ingo Molnar <mingo@xxxxxxx> wrote:
>> > Just wondering, how does Qemu perform on the same system using the
>> > same image, with comparable settings?
>>
>> Freshly built from qemu-kvm.git:
>>
>> $ /home/penberg/qemu-kvm/x86_64-softmmu/qemu-system-x86_64 --version
>> QEMU emulator version 0.14.50 (qemu-kvm-devel), Copyright (c) 2003-2008 Fabrice Bellard
>>
>> Tests were run with this configuration:
>>
>> $ /home/penberg/qemu-kvm/x86_64-softmmu/qemu-system-x86_64 \
>>     -kernel /boot/vmlinuz-3.0.0-rc5+ \
>>     -drive file=/home/penberg/images/debian_squeeze_amd64_standard.img,if=virtio,boot=on \
>>     -drive file=fs.qcow2,if=virtio \
>>     -nographic -m 320 -smp 2 \
>>     -append "root=/dev/vda1 console=ttyS0 init=/root/iobench-write"
>>
>> I'm not sure those are 100% comparable settings, but the results
>> look as follows:
>>
>>   511229952 bytes (511 MB) copied, 12.5543 s, 40.7 MB/s
>>   511229952 bytes (511 MB) copied, 9.50382 s, 53.8 MB/s
>>   511229952 bytes (511 MB) copied, 12.1092 s, 42.2 MB/s
>>   511229952 bytes (511 MB) copied, 13.2981 s, 38.4 MB/s
>>   511229952 bytes (511 MB) copied, 11.3314 s, 45.1 MB/s
>>   511229952 bytes (511 MB) copied, 12.7505 s, 40.1 MB/s
>>   511229952 bytes (511 MB) copied, 11.2921 s, 45.3 MB/s
>>
>> This is what I'd expect, as tools/kvm has much more relaxed sync()
>> guarantees than qemu-kvm: we treat all writes to QCOW2 images as
>> volatile until VIRTIO_BLK_T_FLUSH is issued. Furthermore, this
>> particular (special-case) load is pretty much append-only to the
>> backing file, which is why QCOW here comes so close to raw image
>> performance.
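
[ For illustration, the flush semantics described above come down to
serving guest writes with plain pwrite() into the host page cache and
only calling fsync() when the guest issues VIRTIO_BLK_T_FLUSH. Below
is a minimal C sketch of that idea; the request struct and
handle_blk_req() are hypothetical names made up for this example, and
the QCOW2 cluster mapping is omitted, so this is not the actual
tools/kvm code:

  #include <stdint.h>
  #include <unistd.h>

  #define VIRTIO_BLK_T_IN     0   /* read  */
  #define VIRTIO_BLK_T_OUT    1   /* write */
  #define VIRTIO_BLK_T_FLUSH  4   /* flush */

  struct blk_req {
          uint32_t  type;    /* VIRTIO_BLK_T_* */
          uint64_t  sector;  /* offset in 512-byte sectors */
          void     *buf;
          size_t    len;
  };

  static int handle_blk_req(int fd, struct blk_req *req)
  {
          switch (req->type) {
          case VIRTIO_BLK_T_OUT:
                  /* Volatile: data only lands in the host page cache. */
                  if (pwrite(fd, req->buf, req->len, req->sector << 9) < 0)
                          return -1;
                  return 0;
          case VIRTIO_BLK_T_IN:
                  if (pread(fd, req->buf, req->len, req->sector << 9) < 0)
                          return -1;
                  return 0;
          case VIRTIO_BLK_T_FLUSH:
                  /* The only point where data, and any delayed QCOW2
                   * metadata, must reach stable storage. */
                  return fsync(fd);
          default:
                  return -1;
          }
  }
]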
>
> Pretty impressive numbers!
>
> To relax Qemu's caching guarantees you can append ,cache=writeback to
> your -drive option, i.e. something like:
>
>  -drive file=/dev/shm/test.qcow2,cache=writeback,if=virtio
>
> Does that improve the Qemu results?

Yes, it seems so:

  511229952 bytes (511 MB) copied, 10.0879 s, 50.7 MB/s
  511229952 bytes (511 MB) copied, 4.92686 s, 104 MB/s
  511229952 bytes (511 MB) copied, 13.1955 s, 38.7 MB/s
  511229952 bytes (511 MB) copied, 10.7322 s, 47.6 MB/s
  511229952 bytes (511 MB) copied, 9.46115 s, 54.0 MB/s
  511229952 bytes (511 MB) copied, 14.9963 s, 34.1 MB/s
  511229952 bytes (511 MB) copied, 11.1701 s, 45.8 MB/s

The numbers vary much more from run to run with 'writeback', so it's
difficult to say how much it helps. I'm doing 'drop_caches' after
image creation, so I don't quite understand why they are so unstable.
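
[ For reference, dropping the page cache between runs is typically
done as:

  sync
  echo 3 > /proc/sys/vm/drop_caches

assuming the usual '3' variant (drop page cache, dentries and
inodes); the exact invocation isn't shown above. ]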

                                Pekka

