On Wed, Jul 20, 2011 at 12:00:44AM -0400, Umesh Deshpande wrote:
> This patch creates a separate thread for the guest migration on the
> source side. The migration routine is called from the migration clock.
>
> Signed-off-by: Umesh Deshpande <udeshpan@xxxxxxxxxx>
> ---
>  arch_init.c      |  8 +++++++
>  buffered_file.c  | 10 ++++-----
>  migration-tcp.c  | 18 ++++++++---------
>  migration-unix.c |  7 ++----
>  migration.c      | 56 +++++++++++++++++++++++++++++--------------------------
>  migration.h      |  4 +--
>  6 files changed, 57 insertions(+), 46 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index f81a729..6d44b72 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -260,6 +260,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          return 0;
>      }
>
> +    if (stage != 3) {
> +        qemu_mutex_lock_iothread();
> +    }
> +
>      if (cpu_physical_sync_dirty_bitmap(0, TARGET_PHYS_ADDR_MAX) != 0) {
>          qemu_file_set_error(f);
>          return 0;
> @@ -267,6 +271,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>
>      sync_migration_bitmap(0, TARGET_PHYS_ADDR_MAX);
>
> +    if (stage != 3) {
> +        qemu_mutex_unlock_iothread();
> +    }
> +

Many data structures shared by the vcpus/iothread and the migration
thread are accessed simultaneously without protection.

Instead of simply moving the entire migration routine to a thread, I'd
suggest moving only the time-consuming work in ram_save_block (dup_page
and put_buffer), after properly auditing for shared access. And send
more than one page at a time, of course.

A separate lock for ram_list is probably necessary, so that it can be
accessed from the migration thread.
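
Roughly what I have in mind, as a sketch only (ram_list_mutex and
migration_thread_send_pages() are made-up names for illustration;
qemu_mutex_*(), QLIST_FOREACH() and the ram_list.blocks list are the
existing primitives):

#include "qemu-thread.h"    /* QemuMutex, qemu_mutex_*() */

/* Initialized once at startup with qemu_mutex_init(). */
static QemuMutex ram_list_mutex;

/*
 * Runs in the migration thread: only the expensive part (scanning
 * blocks, dup_page()/put_buffer()) happens here, under ram_list_mutex
 * instead of the global iothread lock.
 */
static void migration_thread_send_pages(QEMUFile *f)
{
    RAMBlock *block;

    qemu_mutex_lock(&ram_list_mutex);
    QLIST_FOREACH(block, &ram_list.blocks, next) {
        /*
         * Find dirty pages in this block and send a batch of them
         * with dup_page()/put_buffer(); details omitted.
         */
    }
    qemu_mutex_unlock(&ram_list_mutex);
}

The point is that any iothread-side code that modifies ram_list (e.g.
memory hotplug adding a block) would take the same mutex around its
update, so the migration thread never needs the global lock for the
copy-out work.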