Re: KVM call agenda for Nov 23

On 22.11.2010 14:55, Stefan Hajnoczi wrote:
> On Mon, Nov 22, 2010 at 1:38 PM, Juan Quintela <quintela@xxxxxxxxxx> wrote:
>>
>> Please send in any agenda items you are interested in covering.
> 
> QCOW2 performance roadmap:
> * What can be done to achieve near-raw image format performance?
> * Benchmark results from an ideal QCOW2 model.

Some thoughts on qcow2 performance:

== Fully allocated image ==
Should be able to perform similarly to raw because there is very little
metadata handling involved. Additional I/O happens only if an L2 table
must be read from disk.

* Should we increase the L2 table cache size to make that happen less
often? (Currently 16 tables, each covering 512 MB of guest data; QED
uses more)
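
For reference, a rough illustration (not the actual qcow2 code; the cache
lookup helper below is made up) of what the current cache size means with
the default 64k cluster size:

/* Illustration only: with 64k clusters an L2 table holds 8192 8-byte
 * entries and therefore maps 512 MB of guest data, so a cache of 16
 * tables covers 8 GB of the virtual disk. */

#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_SIZE   (64 * 1024)                           /* default cluster size */
#define L2_ENTRIES     (CLUSTER_SIZE / sizeof(uint64_t))     /* 8192 entries */
#define L2_COVERAGE    ((uint64_t)L2_ENTRIES * CLUSTER_SIZE) /* 512 MB per table */
#define L2_CACHE_SIZE  16                                    /* 16 * 512 MB = 8 GB */

/* Indices of the L2 tables currently cached; assume initialized to -1. */
static int64_t l2_cache_index[L2_CACHE_SIZE];

/* True if the request at guest_offset can be mapped without reading an
 * L2 table from disk, i.e. without any additional metadata I/O. */
static bool l2_table_is_cached(uint64_t guest_offset)
{
    int64_t l2_index = guest_offset / L2_COVERAGE;  /* which L2 table */
    int i;

    for (i = 0; i < L2_CACHE_SIZE; i++) {
        if (l2_cache_index[i] == l2_index) {
            return true;
        }
    }
    return false;  /* miss: one extra 64k read before the data I/O */
}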

Known problems:
* Synchronous read of L2 tables; should be made async
** General thought on making things async: Coroutines? What happened to
that proposal? (A rough sketch of what it could look like for the L2 read
follows after this list.)
* We may want to have online defragmentation eventually
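
Very rough sketch of what the coroutine idea could look like for the L2
read path, written against the block layer internals. It assumes the API
names from the coroutine proposal (qemu_coroutine_self,
qemu_coroutine_enter, qemu_coroutine_yield), which may still change;
qcow2_co_read_l2_table and l2_read_cb are made-up placeholders, not real
qcow2 code:

/* Sketch only: submit the L2 table read with the existing AIO interface
 * and yield the coroutine instead of blocking in bdrv_pread(). */

typedef struct L2ReadCo {
    Coroutine *co;   /* coroutine waiting for the read to complete */
    int ret;
} L2ReadCo;

/* AIO completion callback: wake up the coroutine that submitted the read. */
static void l2_read_cb(void *opaque, int ret)
{
    L2ReadCo *s = opaque;

    s->ret = ret;
    qemu_coroutine_enter(s->co, NULL);
}

/* Must run in coroutine context. */
static int qcow2_co_read_l2_table(BlockDriverState *bs, int64_t l2_offset,
                                  uint64_t *l2_table, size_t l2_size)
{
    L2ReadCo s = { .co = qemu_coroutine_self() };
    QEMUIOVector qiov;
    struct iovec iov = { .iov_base = l2_table, .iov_len = l2_size };

    qemu_iovec_init_external(&qiov, &iov, 1);
    bdrv_aio_readv(bs->file, l2_offset >> 9, &qiov, l2_size >> 9,
                   l2_read_cb, &s);
    qemu_coroutine_yield();   /* resumed by l2_read_cb() */

    return s.ret;
}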

== Growing stand-alone image ==
Stand-alone images (i.e. images without a backing file) aren't that
interesting because you would use raw for them anyway if you needed
optimal performance. We need to be "good enough" here.

However, all of the problems that arise from dealing with metadata apply
for the really interesting third case, so optimizing them is an
important step on the way.

Known problems:
* Needs a bdrv_flush between the refcount table write and the L2 table
write (see the ordering sketch after this list)
* Synchronous metadata updates
* Both to be solved by block-queue
** Batches writes and makes them async, can greatly reduce the number of
bdrv_flush calls
** Except for cache=writethrough, but this is secondary
** Should we make cache=off the default caching mode in qemu?
Writethrough seems to be a bit too much anyway, irrespective of the image
format.
* Synchronous refcount table reads
** How frequent are cache misses?
** Making this one async is much harder than for L2 table reads. We can
make it a mid-term goal, but in the short term we should just make it
hurt less if it turns out to be a problem in practice.
*** It probably isn't: without internal snapshots or compression we never
free clusters, so refcount blocks are filled sequentially and a new one
is loaded only when the old one is full - and that new block is written
rather than read, so block-queue will help
* Things like refcount table growth are completely synchronous.
** Not a real problem, because it happens approximately never.
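
To make the ordering requirement concrete, here is a sketch of the
allocation path; update_refcount_block and update_l2_entry are
placeholders for the real qcow2 functions, not actual code:

/* Sketch of the ordering constraint when allocating a new cluster. */
static int allocate_cluster_ordered(BlockDriverState *bs,
                                    uint64_t guest_offset,
                                    uint64_t new_cluster)
{
    int ret;

    /* 1. Mark the new cluster as allocated in the refcount structures. */
    ret = update_refcount_block(bs, new_cluster);
    if (ret < 0) {
        return ret;
    }

    /* 2. This is the flush that hurts: the refcount update must hit the
     *    disk before any L2 entry points at the cluster, otherwise a
     *    crash could leave an L2 entry referencing a cluster that the
     *    refcounts still consider free. block-queue would batch several
     *    allocations and issue far fewer of these flushes. */
    bdrv_flush(bs);

    /* 3. Only now link the cluster into the L2 table. */
    return update_l2_entry(bs, guest_offset, new_cluster);
}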

== Growing image with backing file ==
This is the really interesting scenario where you need an image format
that provides some features. For qcow2, it's mostly the same as above.

See stand-alone, plus:
* Needs a bdrv_flush between the COW and the L2 table write
** qcow2 already has one after the refcount table write, so no additional
overhead
* Synchronous COW
** Should be fairly easy to make async
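
A sketch of the COW step itself (names are placeholders; real code has to
go through the whole backing file chain and handle partially allocated
clusters, which is omitted here). Making it async would mean turning the
bdrv_pread/bdrv_pwrite pair into their AIO equivalents or running it in a
coroutine, like the L2 read sketch above:

/* Sketch only: copy the old cluster contents before linking the new
 * cluster into the L2 table. */
static int cow_cluster(BlockDriverState *bs, uint64_t guest_cluster_offset,
                       uint64_t new_cluster, uint8_t *buf,
                       size_t cluster_size)
{
    int ret;

    /* Read the old contents (from the backing file for a cluster that is
     * unallocated in the top image). */
    ret = bdrv_pread(bs->backing_hd, guest_cluster_offset, buf, cluster_size);
    if (ret < 0) {
        return ret;
    }

    /* Write them to the newly allocated cluster in the top image. */
    ret = bdrv_pwrite(bs->file, new_cluster, buf, cluster_size);
    if (ret < 0) {
        return ret;
    }

    /* The copied data must be stable before the L2 entry points at the
     * new cluster; this flush is shared with the refcount ordering above,
     * so it adds no extra overhead. */
    bdrv_flush(bs->file);

    return 0;
}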