Re: [PATCH 12/21] qemu: migration: src: add NBD unixSock to iothread

On 11/18/2015 01:13 PM, Pavel Boldin wrote:
> Pass UNIX socket used as a local NBD server destination to the
> migration iothread.
> 
> Signed-off-by: Pavel Boldin <pboldin@xxxxxxxxxxxx>
> ---
>  src/qemu/qemu_migration.c | 46 ++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 38 insertions(+), 8 deletions(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index 61e78c5..0682fd8 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -3993,6 +3993,7 @@ struct _qemuMigrationIOThread {
>      virThread thread;
>      virStreamPtr qemuStream;
>      int qemuSock;
> +    int unixSock;
>      virError err;
>      int wakeupRecvFD;
>      int wakeupSendFD;
> @@ -4003,7 +4004,7 @@ qemuMigrationIOFunc(void *arg)
>  {
>      qemuMigrationIOThreadPtr data = arg;
>      char *buffer = NULL;
> -    struct pollfd fds[2];
> +    struct pollfd fds[3];
>      int timeout = -1;
>      virErrorPtr err = NULL;
>  
> @@ -4013,8 +4014,8 @@ qemuMigrationIOFunc(void *arg)
>          goto abrt;
>  
>      fds[0].fd = data->wakeupRecvFD;
> -    fds[1].fd = -1;
> -    fds[0].events = fds[1].events = POLLIN;
> +    fds[1].fd = fds[2].fd = -1;
> +    fds[0].events = fds[1].events = fds[2].events = POLLIN;
>  
>      for (;;) {

You'll need to clear fds[2].revents here too.  Or are we not polling on
this one?  It seems we're not, since there's no fds[2].revents handling
below (yet?).
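
If we do end up polling it, I'd expect the reset at the top of the loop
to simply grow a third member, something like (untested sketch):

    fds[0].revents = fds[1].revents = fds[2].revents = 0;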

>          int ret;
> @@ -4057,7 +4058,9 @@ qemuMigrationIOFunc(void *arg)
>                      break;
>                  case 'u':
>                      fds[1].fd = data->qemuSock;
> -                    VIR_DEBUG("qemuSock set %d", data->qemuSock);
> +                    fds[2].fd = data->unixSock;
> +                    VIR_DEBUG("qemuSock set %d, unixSock set %d",
> +                              data->qemuSock, data->unixSock);

Two for one specials...  I do have some concern over the synchronization
between setting and closing these sockets.  The "real" descriptors are
managed via spec.*, while these copies live in io*, so we need to be sure
the io* copy stops being used before the spec.* descriptor is closed.
I don't think it's an issue today, but the more options added, the more
things to consider.
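
To spell out the ordering I mean, roughly (untested, and I'm going from
memory on the exact helper name):

    /* first make the iothread stop using its copies of the fds */
    qemuMigrationStopTunnel(iothread, false);

    /* and only then close the 'real' descriptor owned by spec */
    VIR_FORCE_CLOSE(spec->nbd_tunnel_unix_socket.sock);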

>                      break;
>              }
>          }
> @@ -4126,7 +4129,7 @@ qemuMigrationStartTunnel(virStreamPtr qemuStream)
>          goto error;
>  
>      io->qemuStream = qemuStream;
> -    io->qemuSock = -1;
> +    io->qemuSock = io->unixSock = -1;
>      io->wakeupRecvFD = wakeupFD[0];
>      io->wakeupSendFD = wakeupFD[1];
>  
> @@ -4202,6 +4205,26 @@ qemuMigrationSetQEMUSocket(qemuMigrationIOThreadPtr io, int sock)
>  }
>  
>  static int
> +qemuMigrationSetUnixSocket(qemuMigrationIOThreadPtr io, int sock)
> +{
> +    int rv = -1;
> +    char action = 'u';
> +
> +    io->unixSock = sock;
> +
> +    if (safewrite(io->wakeupSendFD, &action, 1) != 1) {
> +        virReportSystemError(errno, "%s",
> +                             _("failed to update migration tunnel"));
> +        goto error;
> +    }
> +
> +    rv = 0;
> +
> + error:
> +    return rv;
> +}
> +
> +static int
>  qemuMigrationConnect(virQEMUDriverPtr driver,
>                       virDomainObjPtr vm,
>                       qemuMigrationSpecPtr spec)
> @@ -4313,6 +4336,16 @@ qemuMigrationRun(virQEMUDriverPtr driver,
>      if (qemuDomainMigrateGraphicsRelocate(driver, vm, mig, graphicsuri) < 0)
>          VIR_WARN("unable to provide data for graphics client relocation");
>  
> +    if (spec->fwdType != MIGRATION_FWD_DIRECT) {
> +        if (!(iothread = qemuMigrationStartTunnel(spec->fwd.stream)))
> +            goto cancel;
> +
> +        if (nmigrate_disks &&
> +            qemuMigrationSetUnixSocket(iothread,
> +                                       spec->nbd_tunnel_unix_socket.sock) < 0)
> +            goto cancel;
> +    }
> +

So again here we seem to be doing two things at one time.  In
particular, this moves the creation/setup of the tunnel much earlier in
the processing, before we even enter the monitor.

I would also think that going to 'cancel' is not the right failure step
here, since at this point nothing has been started that would need
cancelling.

Why is the check 'nmigrate_disks' rather than 'mig->nbd'?  If the
destination doesn't support NBD, then one would think the
nbd_tunnel_unix_socket wouldn't have been created, but I don't recall
that check being made in the earlier patches.

Perhaps the above lines need to be separated, with the setting of the
socket for the nbd_tunnel* done only if mig->nbd is true below.

That way you're not setting it without also setting the migrate_flags
for the *DISK|*INC bits...
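
Something along these lines is what I have in mind (completely untested,
just to illustrate the split; I've left the error label alone pending the
'cancel' question above):

    if (spec->fwdType != MIGRATION_FWD_DIRECT) {
        if (!(iothread = qemuMigrationStartTunnel(spec->fwd.stream)))
            goto cancel;
    }

    if (migrate_flags & (QEMU_MONITOR_MIGRATE_NON_SHARED_DISK |
                         QEMU_MONITOR_MIGRATE_NON_SHARED_INC)) {
        /* ... existing NBD storage migration setup ... */

        /* only wire the NBD unix socket into the iothread when storage
         * really is going over NBD and we actually have a tunnel */
        if (mig->nbd && spec->fwdType != MIGRATION_FWD_DIRECT &&
            qemuMigrationSetUnixSocket(iothread,
                                       spec->nbd_tunnel_unix_socket.sock) < 0)
            goto cancel;
    }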

John
>      if (migrate_flags & (QEMU_MONITOR_MIGRATE_NON_SHARED_DISK |
>                           QEMU_MONITOR_MIGRATE_NON_SHARED_INC)) {
>          bool dest_host = spec->destType == MIGRATION_DEST_HOST;
> @@ -4444,9 +4477,6 @@ qemuMigrationRun(virQEMUDriverPtr driver,
>      }
>  
>      if (spec->fwdType != MIGRATION_FWD_DIRECT) {
> -        if (!(iothread = qemuMigrationStartTunnel(spec->fwd.stream)))
> -            goto cancel;
> -
>          if (qemuMigrationSetQEMUSocket(iothread, fd) < 0)
>              goto cancel;
>          /* If we've created a tunnel, then the 'fd' will be closed in the
> 
