Re: [libvirt] [PATCH v2 4/5] Implement virDomainMigrateSetMaxDowntime in qemu driver

On Thu, Mar 18, 2010 at 08:12:47PM +0100, Jiri Denemark wrote:
> ---
>  src/qemu/qemu_driver.c       |   70 +++++++++++++++++++++++++++++++++++++++++-
>  src/qemu/qemu_monitor.c      |   15 +++++++++
>  src/qemu/qemu_monitor.h      |    3 ++
>  src/qemu/qemu_monitor_json.c |   29 +++++++++++++++++
>  src/qemu/qemu_monitor_json.h |    3 ++
>  src/qemu/qemu_monitor_text.c |   27 ++++++++++++++++
>  src/qemu/qemu_monitor_text.h |    3 ++
>  7 files changed, 149 insertions(+), 1 deletions(-)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 4cb47f7..d04d9bf 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -99,6 +99,11 @@ enum qemuDomainJob {
>  enum qemuDomainJobSignals {
>      QEMU_JOB_SIGNAL_CANCEL  = 1 << 0, /* Request job cancellation */
>      QEMU_JOB_SIGNAL_SUSPEND = 1 << 1, /* Request VM suspend to finish live migration offline */
> +    QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME = 1 << 2, /* Request migration downtime change */
> +};
> +
> +struct qemuDomainJobSignalsData {
> +    unsigned long long migrateDowntime; /* Data for QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME */
>  };
>  
>  typedef struct _qemuDomainObjPrivate qemuDomainObjPrivate;
> @@ -107,6 +112,7 @@ struct _qemuDomainObjPrivate {
>      virCond jobCond; /* Use in conjunction with main virDomainObjPtr lock */
>      enum qemuDomainJob jobActive;   /* Currently running job */
>      unsigned int jobSignals;        /* Signals for running job */
> +    struct qemuDomainJobSignalsData jobSignalsData; /* Signal specific data */
>      virDomainJobInfo jobInfo;
>      unsigned long long jobStart;

  I think I would have had an easier time understanding the patch with
some explanation that jobSignalsData carries the user-requested change,
and that the change is applied once the background migration job wakes
up, looks at the data, and passes it to qemu.

[...]
> @@ -4061,6 +4070,17 @@ qemuDomainWaitForMigrationComplete(struct qemud_driver *driver, virDomainObjPtr
>              VIR_DEBUG0("Pausing domain for non-live migration");
>              if (qemuDomainMigrateOffline(driver, vm) < 0)
>                  VIR_WARN0("Unable to pause domain");
> +        } else if (priv->jobSignals & QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME) {
> +            unsigned long long ns = priv->jobSignalsData.migrateDowntime;
> +
> +            priv->jobSignals ^= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
> +            priv->jobSignalsData.migrateDowntime = 0;
> +            VIR_DEBUG("Setting migration downtime to %lluns", ns);
> +            qemuDomainObjEnterMonitorWithDriver(driver, vm);
> +            rc = qemuMonitorSetMigrationDowntime(priv->mon, ns);
> +            qemuDomainObjExitMonitorWithDriver(driver, vm);
> +            if (rc < 0)
> +                VIR_WARN0("Unable to set migration downtime");
>          }
>  
>          qemuDomainObjEnterMonitorWithDriver(driver, vm);
> @@ -9516,6 +9536,54 @@ cleanup:
>  }
>  
>  
> +static int
> +qemuDomainMigrateSetMaxDowntime(virDomainPtr dom,
> +                                unsigned long long downtime,
> +                                unsigned int flags ATTRIBUTE_UNUSED)
> +{
> +    struct qemud_driver *driver = dom->conn->privateData;
> +    virDomainObjPtr vm;
> +    qemuDomainObjPrivatePtr priv;
> +    int ret = -1;
> +
> +    qemuDriverLock(driver);
[...]
> +    priv = vm->privateData;
> +
> +    if (priv->jobActive != QEMU_JOB_MIGRATION) {
> +        qemuReportError(VIR_ERR_OPERATION_INVALID,
> +                        "%s", _("domain is not being migrated"));
> +        goto cleanup;
> +    }
> +
> +    VIR_DEBUG("Requesting migration downtime change to %lluns", downtime);
> +    priv->jobSignals |= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
> +    priv->jobSignalsData.migrateDowntime = downtime;

  There is something I don't fully see in this, which is how the
producer and consumer of priv->jobSignals are synchronized. I would
actually swap those 2 statements if there is no synchronization, to
make sure we don't pick up an uninitialized (or zeroed) value in
qemuDomainWaitForMigrationComplete(). I'm not clear whether this can
be processed by 2 different threads.

  Everything else looks just fine to me,

ACK,

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@xxxxxxxxxxxx  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
