[ kvm-Bugs-2926083 ] qcow2 default cache slows 0.12.1.2 vs kvm-88

Bugs item #2926083 was opened at 2010-01-05 08:16
Message generated for change (Comment added) made by bakaproject
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2926083&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: qemu
Group: None
Status: Closed
Resolution: Wont Fix
Priority: 5
Private: No
Submitted By: Baka Project (bakaproject)
Assigned to: Nobody/Anonymous (nobody)
Summary: qcow2 default cache slows 0.12.1.2 vs kvm-88

Initial Comment:
An install of a Linux guest closely resembling RHEL 5.4 x86_64 (Linux rats1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux) is extremely slow in 0.12.1.2 compared to kvm-88.  Some extensive git bisecting revealed that the default cache strategy for qcow2 had changed from writeback to writethrough.  When I run qemu with cache=writeback, the performance is restored.

I understand the performance/correctness tradeoff here (after wasting a day tracking it down).  I guess this is a plea to either restore the previous behavior or better advertise the change and the possible workarounds.

qemu command (without explicit cache): aoss x86_64-softmmu/qemu-system-x86_64 -enable-kvm -name rats2 -hda /VMs/images/rats2.cow -m 650 -monitor telnet:localhost:9008,server,nowait,nodelay -vnc :8 -cdrom /VMs/cds/curodw.iso -boot dc  -net nic,model=rtl8139,macaddr=2e:2e:2e:0:0:db,vlan=0 -net tap,ifname=rats2_0,vlan=0,script=no,downscript=no -net nic,model=rtl8139,macaddr=2e:2e:2e:0:1:db,vlan=1 -net tap,ifname=rats2_1,vlan=1,script=no,downscript=no -net nic,model=rtl8139,macaddr=2e:2e:2e:0:2:db,vlan=2 -net tap,ifname=rats2_2,vlan=2,script=no,downscript=no -serial telnet:localhost:9108,server,nowait,nodelay -serial telnet:localhost:9208,server,nowait,nodelay -daemonize -pidfile /VMs/var/pids/rats2 -usbdevice tablet

Command with explicit cache: aoss x86_64-softmmu/qemu-system-x86_64 -enable-kvm -name rats2 -drive file=/VMs/images/rats2.cow,if=ide,index=0,cache=writeback -m 650 -monitor telnet:localhost:9008,server,nowait,nodelay -vnc :8 -cdrom /VMs/cds/curodw.iso -boot dc  -net nic,model=rtl8139,macaddr=2e:2e:2e:0:0:db,vlan=0 -net tap,ifname=rats2_0,vlan=0,script=no,downscript=no -net nic,model=rtl8139,macaddr=2e:2e:2e:0:1:db,vlan=1 -net tap,ifname=rats2_1,vlan=1,script=no,downscript=no -net nic,model=rtl8139,macaddr=2e:2e:2e:0:2:db,vlan=2 -net tap,ifname=rats2_2,vlan=2,script=no,downscript=no -serial telnet:localhost:9108,server,nowait,nodelay -serial telnet:localhost:9208,server,nowait,nodelay -daemonize -pidfile /VMs/var/pids/rats2 -usbdevice tablet
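
For reference, the only functional difference between the two invocations above is the cache mode on the system disk.  A minimal sketch of just that part (the image path, memory size and binary name are taken from the commands above; everything else is stripped out for clarity):

  # qemu-kvm 0.12.x default for qcow2 is cache=writethrough
  x86_64-softmmu/qemu-system-x86_64 -enable-kvm -hda /VMs/images/rats2.cow -m 650

  # explicit writeback restores the kvm-88 behaviour
  x86_64-softmmu/qemu-system-x86_64 -enable-kvm \
      -drive file=/VMs/images/rats2.cow,if=ide,index=0,cache=writeback -m 650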


Kernel modules: 2.6.32.2ish (gentoo-sources 2.6.32-gentoo-r1)
configure: --audio-drv-list=alsa --kerneldir=/lib/modules/2.6.32-gentoo-r1/build
CPU: Intel(R) Core(TM)2 Extreme CPU X9650  @ 3.00GHz (family 6, model 23, stepping 6, 6MB cache, 4 cores)


----------------------------------------------------------------------

Comment By: Baka Project (bakaproject)
Date: 2010-01-05 15:30

Message:
There is no reason I cannot use cache=writeback; I was just...surprised...by such a drastic performance change with no obvious sign that a default had changed and no useful Google search hits (if I had known to search for cache information I would have found it, but all I knew was that performance had degraded).

I'm not sure how much the 64K cluster size helps, but it does not put writethrough within striking distance of writeback performance.  In my install test (using preallocation=metadata for all tests; image creation is sketched below the table):

cluster | cache        | seconds
64k     | writeback    |  90
64k     | writethrough | 265
4k      | writeback    |  90
4k      | writethrough | 300

An 11% performance gain (in my very unscientific measurement) from the larger cluster size is nothing to sneeze at, to be sure, but it comes nowhere near compensating for the roughly 200% penalty from the cache change.
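
For anyone trying to reproduce these numbers: the images were created with preallocation=metadata, varying only the cluster size.  A hedged sketch of how such test images can be created (file names and the 10G size are placeholders, not taken from the original report):

  # 64K clusters (the qemu-img default for qcow2)
  qemu-img create -f qcow2 -o cluster_size=64K,preallocation=metadata test-64k.qcow2 10G

  # 4K clusters, as in the older-style images
  qemu-img create -f qcow2 -o cluster_size=4K,preallocation=metadata test-4k.qcow2 10G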

I can understand why you want the default to be safe.  I'm not exactly sure how you would better advertise that the apparent IDE (and presumably SCSI and other) drive slowdown is caused by the cache change, but it would be nice.  Perhaps this ticket will help if it ever gets indexed.


----------------------------------------------------------------------

Comment By: Avi Kivity (avik)
Date: 2010-01-05 08:58

Message:
The default must of course be safe.  Is there some reason why you can't use
cache=writeback if you don't care about data integrity?

Note that performance with old images (created with a 4K cluster size) will be much lower than with new images (created with a 64K cluster size).  Check your image's cluster size.
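
A quick way to check the cluster size of an existing image (the path below is the one from the original report; exact output may vary between qemu versions):

  qemu-img info /VMs/images/rats2.cow
  # look for a line such as:
  #   cluster_size: 65536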

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2926083&group_id=180599
