Re: [PATCH 3/4] qemu snapshot: use QMP snapshot-save/delete for internal snapshots

On Tue, Jul 16, 2024 at 01:42:27AM +0300, Nikolai Barybin via Devel wrote:
> The usage of HMP commands is highly discouraged by qemu. Moreover,
> the current snapshot creation routine does not provide flexibility in
> choosing the target device for the VM state snapshot.
> 
> This patch makes use of the QMP commands snapshot-save/delete and by
> default chooses the first writable disk (if present) as the target for
> the VM state, falling back to NVRAM otherwise.
> 
> Signed-off-by: Nikolai Barybin <nikolai.barybin@xxxxxxxxxxxxx>
> ---
>  src/qemu/qemu_snapshot.c | 158 ++++++++++++++++++++++++++++++++++++---
>  1 file changed, 148 insertions(+), 10 deletions(-)
> 
> diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
> index f5260c4a22..83949a9a27 100644
> --- a/src/qemu/qemu_snapshot.c
> +++ b/src/qemu/qemu_snapshot.c
> @@ -308,6 +308,96 @@ qemuSnapshotCreateInactiveExternal(virQEMUDriver *driver,
>      return ret;
>  }
>  
> +static int
> +qemuSnapshotActiveInternalGetWrdevListHelper(virDomainObj *vm,
> +                                             char **wrdevs)
> +{
> +    size_t wrdevCount = 0;
> +    size_t i = 0;
> +
> +    for (i = 0; i < vm->def->ndisks; i++) {
> +        virDomainDiskDef *disk = vm->def->disks[i];
> +        if (!disk->src->readonly) {
> +            wrdevs[wrdevCount] = g_strdup(disk->src->nodenameformat);
> +            wrdevCount++;
> +        }
> +    }

Being !readonly is not a sufficient criterion for allowing use of
internal snapshots. The disk must also be !shareable, and it must be of
a format that permits internal snapshots, i.e. effectively only qcow2.

We should likely deny snapshots entirely if there are any shareable disks,
since I can't see it being conceptually sensible to snapshot a VM while
excluding shared writable disks.

Likewise, deny snapshots if any writable disk is non-qcow2, as snapshots
need to be consistent across the entire writable storage set of the VM.
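
Roughly something like the following in the device-list loop, as an
untested sketch (the src->shared / src->format checks use the existing
virStorageSource fields; the error codes and messages are merely
illustrative):

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDef *disk = vm->def->disks[i];

        if (disk->src->readonly)
            continue;

        /* A shared writable disk can neither be sensibly included in,
         * nor excluded from, a VM-wide snapshot, so refuse outright */
        if (disk->src->shared) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                           _("internal snapshot is not supported with shareable writable disks"));
            return -1;
        }

        /* Internal snapshots are effectively a qcow2-only feature */
        if (disk->src->format != VIR_STORAGE_FILE_QCOW2) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
                           _("internal snapshot requires qcow2 for disk '%1$s'"),
                           disk->dst);
            return -1;
        }

        wrdevs[wrdevCount] = g_strdup(disk->src->nodenameformat);
        wrdevCount++;
    }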

> +
> +    if (wrdevCount == 0) {
> +        if (vm->def->os.loader->nvram) {
> +            wrdevs[0] = g_strdup(vm->def->os.loader->nvram->nodenameformat);

This doesn't make sense IMHO.

If there are no writable disks, we shouldn't be trying to snapshot at
all.
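
IOW, rather than falling back to NVRAM, just report an error when the
writable-disk list ends up empty, e.g. (wording illustrative):

    if (wrdevCount == 0) {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                       _("internal snapshot requires at least one writable disk"));
        return -1;
    }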

> +        } else {
> +            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> +                           _("no writable device for internal snapshot creation/deletion"));
> +            return -1;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +
> +static int
> +qemuSnapshotCreateActiveInternalDone(virDomainObj *vm)
> +{
> +    qemuBlockJobData *job = NULL;
> +    qemuDomainObjPrivate *priv = vm->privateData;
> +
> +    if (!(job = virHashLookup(priv->blockjobs, g_strdup_printf("snapsave%d", vm->def->id)))) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR,
> +                       _("failed to lookup blockjob 'snapsave%1$d'"), vm->def->id);
> +        return -1;
> +    }
> +
> +    qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
> +    if (job->state == VIR_DOMAIN_BLOCK_JOB_FAILED) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR,
> +                       _("snapshot-save job failed: %1$s"), NULLSTR(job->errmsg));
> +        return -1;
> +    }
> +
> +    return job->state == VIR_DOMAIN_BLOCK_JOB_COMPLETED ? 1 : 0;
> +}
> +
> +
> +static int
> +qemuSnapshotCreateActiveInternalStart(virDomainObj *vm,
> +                                      const char *name)
> +{
> +    qemuBlockJobData *job = NULL;
> +    g_autofree char** wrdevs = NULL;
> +    int ret = -1;
> +    int rc = 0;
> +
> +    wrdevs = g_new0(char *, vm->def->ndisks + 1);

IMHO the qemuSnapshotActiveInternalGetWrdevListHelper method should be
allocating this itself, at whatever size it actually needs.
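
i.e. make the parameter an out-parameter and let the helper size the
array. A compressed, not compile-tested sketch:

    static int
    qemuSnapshotActiveInternalGetWrdevListHelper(virDomainObj *vm,
                                                 char ***wrdevs)
    {
        /* zero-filled => NULL terminated, so freeable as a GStrv */
        g_auto(GStrv) devs = g_new0(char *, vm->def->ndisks + 1);
        size_t wrdevCount = 0;
        size_t i;

        for (i = 0; i < vm->def->ndisks; i++) {
            /* ... validity checks & g_strdup() as sketched above,
             *     filling devs[wrdevCount++] ... */
        }

        if (wrdevCount == 0) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                           _("internal snapshot requires at least one writable disk"));
            return -1;
        }

        *wrdevs = g_steal_pointer(&devs);
        return 0;
    }

with the caller then doing simply:

    g_auto(GStrv) wrdevs = NULL;

    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, &wrdevs) < 0)
        return -1;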

> +    if (qemuSnapshotActiveInternalGetWrdevListHelper(vm, wrdevs) < 0)
> +        return -1;
> +
> +    if (!(job = qemuBlockJobDiskNew(vm, NULL, QEMU_BLOCKJOB_TYPE_SNAPSHOT_SAVE,
> +                                    g_strdup_printf("snapsave%d", vm->def->id)))) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("failed to create new blockjob"));
> +        return -1;
> +    }
> +
> +    qemuBlockJobSyncBegin(job);
> +    if (qemuDomainObjEnterMonitorAsync(vm, VIR_ASYNC_JOB_SNAPSHOT) < 0) {
> +        ret = -1;
> +        goto cleanup;
> +    }
> +
> +    rc = qemuMonitorSnapshotSave(qemuDomainGetMonitor(vm), job->name,
> +                                 name, wrdevs[0], wrdevs);
> +    qemuDomainObjExitMonitor(vm);
> +    if (rc == 0) {
> +        qemuBlockJobStarted(job, vm);
> +        ret = 0;
> +    }
> +
> + cleanup:
> +    qemuBlockJobStartupFinalize(vm, job);
> +    return ret;
> +}
> +

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


