On Thu, May 12, 2022 at 17:20:34 +0200, Peter Krempa wrote:
> On Tue, May 10, 2022 at 17:21:30 +0200, Jiri Denemark wrote:
> > Everything was already done in the normal Finish phase and vCPUs are
> > running. We just need to wait for all remaining data to be transferred.
> >
> > Signed-off-by: Jiri Denemark <jdenemar@xxxxxxxxxx>
> > ---
> >  src/qemu/qemu_migration.c | 46 ++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 40 insertions(+), 6 deletions(-)
> >
> > diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> > index a8481f7515..430dfb1abb 100644
> > --- a/src/qemu/qemu_migration.c
> > +++ b/src/qemu/qemu_migration.c
> > @@ -6600,6 +6600,22 @@ qemuMigrationDstFinishFresh(virQEMUDriver *driver,
> >  }
> >
> >
> > +static int
> > +qemuMigrationDstFinishResume(virQEMUDriver *driver,
> > +                             virDomainObj *vm)
> > +{
> > +    VIR_DEBUG("vm=%p", vm);
> > +
> > +    if (qemuMigrationDstWaitForCompletion(driver, vm,
> > +                                          VIR_ASYNC_JOB_MIGRATION_IN,
> > +                                          false) < 0) {
> > +        return -1;
> > +    }
>
> As I mentioned in another reply, IMO it would be useful to allow
> adoption of an unattended running migration precisely for this case, so
> that mgmt apps don't have to encode more logic to wait for events if
> migration was actually running fine.

I think event processing in mgmt apps is inevitable anyway, as the
migration may finish before an app even tries to recover it in the case
where only the libvirt side of the migration was broken. And the
connection may still be broken, making recovery impossible, while the
migration itself is progressing just fine (especially if the migration
streams use a separate network). Even plain post-copy migration that
never fails requires event processing, since the management has to
explicitly switch the migration to post-copy mode. Also, events are
often the only way to get statistics about a completed migration.

That said, I think it would be nice to have this functionality anyway,
so I'll try to look at it as a follow-up series.

Jirka