Re: [RFC Design Doc]Speed up live migration by skipping free pages

On Thu, Mar 24, 2016 at 04:05:16PM +0000, Li, Liang Z wrote:
> 
> 
> > -----Original Message-----
> > From: Michael S. Tsirkin [mailto:mst@xxxxxxxxxx]
> > Sent: Thursday, March 24, 2016 11:57 PM
> > To: Li, Liang Z
> > Cc: Dr. David Alan Gilbert; Wei Yang; qemu-devel@xxxxxxxxxx;
> > kvm@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxx; pbonzini@xxxxxxxxxx;
> > rth@xxxxxxxxxxx; ehabkost@xxxxxxxxxx; amit.shah@xxxxxxxxxx;
> > quintela@xxxxxxxxxx; mohan_parthasarathy@xxxxxxx;
> > jitendra.kolhe@xxxxxxx; simhan@xxxxxxx; rkagan@xxxxxxxxxxxxx;
> > riel@xxxxxxxxxx
> > Subject: Re: [RFC Design Doc]Speed up live migration by skipping free pages
> > 
> > On Thu, Mar 24, 2016 at 03:53:25PM +0000, Li, Liang Z wrote:
> > > > > > > Not very complex, we can implement like this:
> > > > > > >
> > > > > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > > > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > > > > 3. Send the get_free_page_bitmap request
> > > > > > > 4. Start to send pages to the destination and check if the free_page_bitmap is ready
> > > > > > >    if (is_ready) {
> > > > > > >        filter out the free pages from migration_bitmap_rcu->bmap;
> > > > > > >        migration_bitmap_sync();
> > > > > > >    }
> > > > > > >    continue until live migration completes.
> > > > > > >
> > > > > > >
> > > > > > > Is that right?
> > > > > >
> > > > > > The order I'm trying to understand is something like:
> > > > > >
> > > > > >     a) Send the get_free_page_bitmap request
> > > > > >     b) Start sending pages
> > > > > >     c) Reach the end of memory
> > > > > >       [ is_ready is false - guest hasn't made free map yet ]
> > > > > >     d) normal migration_bitmap_sync() at end of first pass
> > > > > >     e) Carry on sending dirty pages
> > > > > >     f) is_ready is true
> > > > > >       f.1) filter out free pages?
> > > > > >       f.2) migration_bitmap_sync()
> > > > > >
> > > > > > It's f.1 I'm worried about.  If the guest started generating the
> > > > > > free bitmap before (d), then a page marked as 'free' in f.1
> > > > > > might have become dirty before (d), so (f.2) doesn't set the
> > > > > > dirty bit again, and so we can't filter out pages in f.1.
> > > > > >
> > > > >
> > > > > As you described, the order is incorrect.
> > > > >
> > > > > Liang
> > > >
> > > >
> > > > So to make it safe, what is required is to make sure no free list is
> > > > outstanding before calling migration_bitmap_sync.
> > > >
> > > > If one is outstanding, filter out pages before calling
> > > > migration_bitmap_sync.
> > > >
> > > > Of course, if we just do it like we normally do with migration, then
> > > > by the time we call migration_bitmap_sync the dirty bitmap is
> > > > completely empty, so there won't be anything to filter out.
> > > >
> > > > One way to address this is to call migration_bitmap_sync in the IO
> > > > handler, while the VCPU is stopped, then make sure to filter out pages
> > > > before the next migration_bitmap_sync.
> > > >
> > > > Another is to start filtering out pages in the IO handler, but make
> > > > sure to flush the queue before calling migration_bitmap_sync.
> > > >
> > >
> > > It's really complex; maybe we should start with something simple: just
> > > skip the free pages in the RAM bulk stage and make it asynchronous?
> > >
> > > Liang
> > 
> > You mean like your patches do? No, blocking bulk migration until the
> > guest responds is basically a non-starter.
> > 
> 
> No, we don't wait anymore. Like below (copied from the previous thread):
> --------------------------------------------------------------
> 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> 3. Send the get_free_page_bitmap request
> 4. Start to send pages to the destination and check if the free_page_bitmap is ready
>    if (is_ready) {
>        filter out the free pages from migration_bitmap_rcu->bmap;
>        migration_bitmap_sync();
>    }
>    continue until live migration completes.
> ---------------------------------------------------------------
> Can this work?
> 
> Liang

Not if you check the ready bit asynchronously like you wrote here,
since is_ready can get set while migration_bitmap_sync is running.

As I said previously,
to make this work you need to do the filtering synchronously, while the
VCPU is stopped and while the free pages on the list are not yet reused.

Alternatively, prevent getting the free page list from the guest
and filtering it out
from racing with migration_bitmap_sync.

For example, flush the VQ after migration_bitmap_sync.
So:

    lock
    migration_bitmap_sync();
    /* flush the queue: drop any free page hints submitted before the sync */
    while (elem = virtqueue_pop) {
        virtqueue_push(elem)    /* return the buffer to the guest unused */
        g_free(elem)
    }
    unlock


while in handle_output

    lock
    while (elem = virtqueue_pop) {
        list = get_free_list(elem)    /* free page bitmap supplied by the guest */
        filter_out_free(list)         /* clear those pages in migration_bitmap_rcu->bmap */
        virtqueue_push(elem)          /* hand the buffer back to the guest */
        g_free(elem)
    }
    unlock
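
To be concrete about the filter step, a rough self-contained sketch of
what filter_out_free could do is below. The standalone form and the names
in it are only illustrative; inside QEMU the same operation is a
bitmap_andnot() of the guest's free page bitmap into
migration_bitmap_rcu->bmap.

    #include <stddef.h>

    /* Illustrative only: clear every page the guest reported as free from
     * the migration dirty bitmap, i.e. dirty &= ~free, word by word
     * (which is what QEMU's bitmap_andnot() helper does). */
    static void filter_out_free(unsigned long *dirty_bmap,
                                const unsigned long *free_bmap,
                                size_t nbits)
    {
        size_t bits_per_word = 8 * sizeof(unsigned long);
        size_t nwords = (nbits + bits_per_word - 1) / bits_per_word;

        for (size_t i = 0; i < nwords; i++) {
            dirty_bmap[i] &= ~free_bmap[i];
        }
    }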


The lock prevents migration_bitmap_sync from racing
against handle_output.


This way you can actually use ioeventfd
for this VQ, so the VCPU won't be blocked.

I do not think this is so complex, and
this way you can request the guest's free page
bitmap at an arbitrary interval,
driven either by the host or by the guest.

For example, add a value that says how often
the guest should update the bitmap; set it to 0
to disable updates once migration is done.
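
Roughly, and purely as an illustration (this struct and its field name are
made up, not an existing virtio layout), such a knob could live in the
device config space:

    #include <stdint.h>

    /* Hypothetical config field for the free page hint device: the guest
     * driver reports its free page bitmap every
     * free_page_report_interval_ms milliseconds; the host writes 0 here
     * to stop the updates once migration is done. */
    struct free_page_hint_config {
        uint32_t free_page_report_interval_ms;   /* 0 = updates disabled */
    };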

Or, make the guest resubmit a new bitmap when we consume
the old one, and run handle_output through
a periodic timer on the host.
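
A minimal sketch of that timer variant follows; everything except the QEMU
timer API itself (timer_new_ms, timer_mod, qemu_clock_get_ms,
QEMU_CLOCK_VIRTUAL) is a hypothetical name used only for illustration:

    /* Poll the hint virtqueue every s->poll_ms milliseconds instead of
     * relying on a guest kick; FreePageHintState, free_page_handle_output,
     * poll_timer and poll_ms are all hypothetical. */
    static void free_page_poll_timer_cb(void *opaque)
    {
        FreePageHintState *s = opaque;

        free_page_handle_output(s);    /* same work as the VQ handler above */

        /* re-arm for the next poll */
        timer_mod(s->poll_timer,
                  qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + s->poll_ms);
    }

    /* setup, e.g. at device realize time:
     *   s->poll_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
     *                                free_page_poll_timer_cb, s);
     *   timer_mod(s->poll_timer,
     *             qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + s->poll_ms);
     */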


> > --
> > MST


