Re: [PATCH 09/10] Exit loop if we have been there too long

Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 11/30/2010 04:17 PM, Anthony Liguori wrote:
>>> What's the problem with burning that cpu?  Per guest page,
>>> compressing takes less time than sending.  Is it just an issue of
>>> qemu mutex hold time?
>>
>>
>> If you have a 512GB guest, then you have a 16MB dirty bitmap, which
>> ends up being a 128MB dirty bitmap in QEMU because we represent each
>> dirty bit with 8 bits.
>
> Was there not a patchset to split each bit into its own bitmap?  And
> then copy the kvm or qemu master bitmap into each client bitmap as it
> became needed?
>
>> Walking 16MB (or 128MB) of memory just to find a few pages to send
>> over the wire is a big waste of CPU time.  If kvm.ko used a
>> multi-level table to represent dirty info, we could walk the memory
>> mapping in 2MB chunks, allowing us to skip a large amount of the
>> comparisons.
>
> There's no reason to assume dirty pages would be clustered.  If 0.2%
> of memory were dirty, but scattered uniformly, there would be no win
> from the two-level bitmap.  A loss, in fact: 2MB can be represented as
> 512 bits or 64 bytes, just one cache line.  Any two-level thing will
> need more.
>
> We might have a more compact encoding for sparse bitmaps, like
> run-length encoding.


I haven't measured it, but I think it would be much better that way.
When we start, it doesn't matter much (everything is dirty); what we
should optimize for is the last rounds, and in the last rounds it would
be much better to ask kvm:

"fill this array of dirty page offsets", and be done with it.
I am not sure whether adding a size field would improve things; both
approaches need to be measured.
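
Something like this is what I have in mind, as a rough sketch (the
struct layout and the KVM_GET_DIRTY_OFFSETS ioctl are invented for the
example; nothing like this exists in kvm.ko today):

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/types.h>

#define KVM_GET_DIRTY_OFFSETS 0     /* ioctl number made up for the sketch */

/* Hypothetical interface: instead of returning a bitmap, kvm fills
 * an array with the offsets (frame numbers) of the dirty pages. */
struct kvm_dirty_offsets {
    __u32 slot;         /* in: memory slot to query */
    __u32 count;        /* in: array capacity, out: entries filled */
    __u64 offsets[0];   /* out: offsets of pages that were dirty */
};

/* Userspace side; error handling omitted for brevity. */
static int get_dirty_offsets(int vm_fd, __u32 slot,
                             __u64 *buf, __u32 capacity)
{
    struct kvm_dirty_offsets *d;
    int ret;

    d = malloc(sizeof(*d) + capacity * sizeof(__u64));
    d->slot = slot;
    d->count = capacity;

    ret = ioctl(vm_fd, KVM_GET_DIRTY_OFFSETS, d);
    if (ret == 0) {
        memcpy(buf, d->offsets, d->count * sizeof(__u64));
        ret = d->count;
    }
    free(d);
    return ret;
}

In the last rounds the array would be tiny, so the copy is cheap; the
interesting question is what happens when it overflows, which is where
the size field above would matter.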

What would be a win independent of that is a way to ask qemu for the
number of dirty pages.  Right now we need to calculate it by walking
the bitmap (one of my patches just simplifies this).

Adding the feature to qemu means that we could always give recent
information to "info migrate" without incurring a big cost.
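
The counting itself needs no new kernel interface: qemu can keep a
running counter next to its own bitmap and update it on every bit
transition.  A minimal sketch (names are illustrative, not qemu's
actual ones):

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long *migration_bitmap;
static long migration_dirty_pages;

/* Only 0 -> 1 transitions are counted, so the counter stays exact. */
static void dirty_page_set(unsigned long pfn)
{
    unsigned long *word = &migration_bitmap[pfn / BITS_PER_LONG];
    unsigned long mask = 1UL << (pfn % BITS_PER_LONG);

    if (!(*word & mask)) {
        *word |= mask;
        migration_dirty_pages++;
    }
}

/* And the same on 1 -> 0 when a page gets sent. */
static void dirty_page_clear(unsigned long pfn)
{
    unsigned long *word = &migration_bitmap[pfn / BITS_PER_LONG];
    unsigned long mask = 1UL << (pfn % BITS_PER_LONG);

    if (*word & mask) {
        *word &= ~mask;
        migration_dirty_pages--;
    }
}

With that, "info migrate" can just report migration_dirty_pages
instead of walking the whole bitmap.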

>> BTW, we should also refactor qemu to use the kvm dirty bitmap
>> directly instead of mapping it to the main dirty bitmap.
>
> That's what the patch set I was alluding to did.  Or maybe I imagined
> the whole thing.

It existed.  And today it would be easier because KQEMU and VGA are not
needed anymore.
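
Once we use the kvm bitmap directly, the walk itself can also go a
word at a time, so clean regions cost one comparison per 64 pages
instead of one per page.  A sketch (gcc builtins, not the real qemu
loop):

/* Walk a bit-packed dirty bitmap (1 bit per page, as kvm hands it to
 * us) word by word, skipping fully clean words cheaply. */
static void for_each_dirty_page(unsigned long *bitmap,
                                unsigned long nr_pages,
                                void (*fn)(unsigned long pfn))
{
    unsigned long nbits = 8 * sizeof(unsigned long);
    unsigned long i, nwords = (nr_pages + nbits - 1) / nbits;

    for (i = 0; i < nwords; i++) {
        unsigned long word = bitmap[i];

        while (word) {
            int bit = __builtin_ctzl(word); /* lowest set bit */
            fn(i * nbits + bit);
            word &= word - 1;               /* clear that bit */
        }
    }
}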

>>>> We also need to implement live migration in a separate thread that
>>>> doesn't carry qemu_mutex while it runs.
>>>
>>> IMO that's the biggest hit currently.
>>
>> Yup.  That's the Correct solution to the problem.
>
> Then let's just Do it.

Will take a look at splitting the qemu_mutex bit.
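
Roughly the shape I would expect: a dedicated thread that takes
qemu_mutex only around the parts that touch guest state, and does the
expensive work (compression, socket writes) outside it.  A sketch with
pthreads; all the helper names are placeholders:

#include <pthread.h>

extern pthread_mutex_t qemu_mutex;

/* Placeholders for the real work; none of these exist by this name. */
extern void sync_dirty_bitmap(void);
extern int collect_dirty_pages(void);   /* nonzero when converged */
extern void send_collected_pages(void);

static void *migration_thread(void *opaque)
{
    int done = 0;

    (void)opaque;   /* unused in the sketch */

    while (!done) {
        /* Hold the mutex only while snapshotting dirty info and
         * copying guest RAM out. */
        pthread_mutex_lock(&qemu_mutex);
        sync_dirty_bitmap();
        done = collect_dirty_pages();
        pthread_mutex_unlock(&qemu_mutex);

        /* Compression and socket I/O run without the mutex. */
        send_collected_pages();
    }
    return NULL;
}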

Later, Juan.

