Re: [RFC 3/4] A separate thread for the VM migration

----- Original Message -----
From: "Marcelo Tosatti" <mtosatti@xxxxxxxxxx>
To: "Umesh Deshpande" <udeshpan@xxxxxxxxxx>
Cc: kvm@xxxxxxxxxxxxxxx, qemu-devel@xxxxxxxxxx
Sent: Wednesday, July 20, 2011 3:02:46 PM
Subject: Re: [RFC 3/4] A separate thread for the VM migration

On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the source side. The migration routine is called from the migration clock.
> 
> Signed-off-by: Umesh Deshpande <udeshpan@xxxxxxxxxx>
> ---
>  arch_init.c      |    8 +++++++
>  buffered_file.c  |   10 ++++-----
>  migration-tcp.c  |   18 ++++++++---------
>  migration-unix.c |    7 ++----
>  migration.c      |   56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |    4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>  
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>  
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by the vcpus/iothread and the migration
thread are accessed simultaneously without protection. Instead of simply
moving the entire migration routine to a thread, I'd suggest moving only
the time-consuming work in ram_save_block (dup_page and put_buffer),
after properly auditing for shared access. And send more than one page
at a time, of course.
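
For concreteness, the split suggested above might look roughly like the
sketch below. PageRequest, page_queue and save_page_worker are
illustrative names, not from the patch, and the real stream also writes
the RAMBlock idstr, which is omitted here:

/* Sketch only, assuming the arch_init.c context (qemu-thread.h,
 * is_dup_page(), RAM_SAVE_FLAG_*): the iothread scans the dirty bitmap
 * under qemu_mutex and only queues page addresses; a worker thread does
 * the expensive is_dup_page() check and the put_buffer work, several
 * pages at a time. */
typedef struct PageRequest {
    ram_addr_t addr;                 /* guest RAM offset of the page */
    struct PageRequest *next;
} PageRequest;

static QemuMutex queue_lock;
static QemuCond queue_cond;
static PageRequest *page_queue;      /* filled by the iothread */

static void *save_page_worker(void *opaque)
{
    QEMUFile *f = opaque;

    qemu_mutex_lock(&queue_lock);
    for (;;) {
        PageRequest *req;

        while (!page_queue) {
            qemu_cond_wait(&queue_cond, &queue_lock);
        }
        req = page_queue;
        page_queue = req->next;
        qemu_mutex_unlock(&queue_lock);

        /* The expensive part runs without the global lock held. */
        uint8_t *p = qemu_get_ram_ptr(req->addr);
        if (is_dup_page(p, *p)) {
            qemu_put_be64(f, req->addr | RAM_SAVE_FLAG_COMPRESS);
            qemu_put_byte(f, *p);
        } else {
            qemu_put_be64(f, req->addr | RAM_SAVE_FLAG_PAGE);
            qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
        }
        qemu_free(req);

        qemu_mutex_lock(&queue_lock);
    }
    return NULL;
}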

The group of migration routines moved into the thread needs to be
executed sequentially, because of the way the protocol is designed.
Currently, migration is performed in sections, and we cannot proceed to
the next section until the current section has been written to the
QEMUFile. A thread for any sub-part would introduce parallelism and
break these sequential semantics. (Condition variables will have to be
used to ensure sequentiality between the new thread and the iothread;
a sketch follows.)
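
A minimal sketch of that handshake, using QEMU's qemu-thread.h
primitives (section_lock, section_cond and the two helpers are
illustrative names, not existing code):

/* Illustrative handshake: the iothread may not start section N+1 until
 * the migration thread has pushed section N into the QEMUFile. */
static QemuMutex section_lock;
static QemuCond section_cond;
static bool section_written;

/* migration thread: after the current section has hit the QEMUFile */
static void section_done(void)
{
    qemu_mutex_lock(&section_lock);
    section_written = true;
    qemu_cond_signal(&section_cond);
    qemu_mutex_unlock(&section_lock);
}

/* iothread: block before preparing the next section */
static void wait_for_section(void)
{
    qemu_mutex_lock(&section_lock);
    while (!section_written) {
        qemu_cond_wait(&section_cond, &section_lock);
    }
    section_written = false;
    qemu_mutex_unlock(&section_lock);
}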

Secondly, put_buffer is called from I/O handlers and timers, both of
which currently run in the iothread. With a separate thread for dup_page
and put_buffer, put_buffer would also be called from inside that thread,
so the same buffered file would be written from two contexts at once.
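
One way to make that safe would be to serialize all callers behind one
lock; for illustration, a hypothetical wrapper around the existing
buffered_put_buffer() callback in buffered_file.c:

/* Hypothetical wrapper (buffered_file_lock is not in the patch) so that
 * calls coming from iothread handlers/timers and from the migration
 * thread are serialized on the same buffered file. */
static QemuMutex buffered_file_lock;

static int buffered_put_buffer_locked(void *opaque, const uint8_t *buf,
                                      int64_t pos, int size)
{
    int ret;

    qemu_mutex_lock(&buffered_file_lock);
    ret = buffered_put_buffer(opaque, buf, pos, size);
    qemu_mutex_unlock(&buffered_file_lock);
    return ret;
}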

Another option with the current implementation would be to hold the
qemu_mutex inside the thread most of the time, releasing it only for
the time-consuming part of ram_save_block.
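
In outline, assuming hypothetical helpers more_to_send() and
prepare_section() for the bookkeeping:

/* Illustrative shape of that option: the migration thread keeps
 * qemu_mutex for the protocol bookkeeping and drops it only around
 * the expensive scan/copy in ram_save_block(). */
static void *migration_thread(void *opaque)
{
    FdMigrationState *s = opaque;

    qemu_mutex_lock_iothread();
    while (more_to_send(s)) {        /* hypothetical predicate */
        prepare_section(s);          /* headers etc., under the lock */

        qemu_mutex_unlock_iothread();
        ram_save_block(s->file);     /* dup_page + put_buffer, unlocked */
        qemu_mutex_lock_iothread();
    }
    qemu_mutex_unlock_iothread();
    return NULL;
}

Dropping the lock around ram_save_block is only safe once its accesses
to ram_list and the migration bitmap are protected separately.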

A separate lock for ram_list is probably necessary, so that it can
be accessed from the migration thread.
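
For illustration, with ram_list_lock as a hypothetical name (every path
that modifies ram_list, e.g. RAM hotplug, would have to take the same
lock):

/* Hypothetical ram_list_lock, so the migration thread can walk
 * ram_list.blocks without holding the global qemu_mutex. */
static QemuMutex ram_list_lock;

static void migration_walk_ram_list(void)
{
    RAMBlock *block;

    qemu_mutex_lock(&ram_list_lock);
    QLIST_FOREACH(block, &ram_list.blocks, next) {
        /* scan this block's pages / dirty bits here */
    }
    qemu_mutex_unlock(&ram_list_lock);
}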
