Re: [PATCH] qemu: snapshot: Wire up revert for disk only snapshots

On 08/13/2012 08:30 AM, Cole Robinson wrote:
> This just uses the existing plumbing to update the guest XML to point
> at the chosen snapshot disk images. It requires the VM to be inactive.

Why limit to just offline?  We already support reverting to offline
checkpoint snapshots from a running domain (where the code already takes
care of shutting down the running domain on your behalf) - you lose
runtime state, but giving up that state is an obvious consequence of
asking to revert, so it is not silent data loss.  However, I could see
gating this action on VIR_DOMAIN_SNAPSHOT_REVERT_FORCE, since stopping a
running domain to go back to a state that probably requires a disk fsck
may count as an unsafe action worth protecting by the extra flag.

> ---
> 
> Reverting to a snapshot with children and then running the guest is
> probably unsafe, since the backing image will change state on the child
> snapshots. How do internal snapshots work around this?

Internal snapshots work by not losing state.  You have to remember that
in qcow2, there is an implicit 'current' state (no name), and then each
snapshot is another entry point into a list of sectors in use at that
point in time, where a sector can be referenced by more than one entry
point.  That is, if I do:

snapshot-create-as $dom a
snapshot-create-as $dom b
snapshot-revert $dom a

then what is happening inside the qcow2 image is:

- refcount 1, owned by 'current': can be modified directly
- refcount 1, owned by 'a' alone or 'b' alone: doesn't matter to current
- refcount 2, owned by 'a' and 'b': doesn't matter to current
- refcount 2, owned by 'a' and current (or 'b' and current): the sector
  is now copy-on-write; if current wants to modify the sector, a new
  sector is allocated with refcount 1 for current, and the old sector's
  refcount decreases by 1
- refcount 3, owned by 'a', 'b', and current: same copy-on-write story

There is no chaining, so each sector plus its refcount stands in
isolation, and even if I then do:

snapshot-delete $dom a
snapshot-create-as $dom a

to repoint 'a' to a different set of sectors, I still have not impacted
the state reached by entry point 'b'.
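To make the bookkeeping concrete, here is a toy Python model of that
refcount/copy-on-write scheme (illustrative only - real qcow2 tracks
this with on-disk L1/L2 tables and a refcount table, and the class and
method names here are invented for the sketch):

```python
class Image:
    """Toy model of qcow2 internal snapshots: sectors are shared
    between 'current' and named snapshots via reference counts."""

    def __init__(self):
        self.next_sector = 0
        self.refcount = {}      # sector id -> number of owners
        self.current = {}       # guest offset -> sector id ('current' state)
        self.snapshots = {}     # snapshot name -> {offset: sector id}

    def _alloc(self):
        s = self.next_sector
        self.next_sector += 1
        self.refcount[s] = 1
        return s

    def write(self, offset):
        """Guest write: if the backing sector is shared with a snapshot
        (refcount > 1), copy-on-write allocates a fresh sector."""
        s = self.current.get(offset)
        if s is None:
            self.current[offset] = self._alloc()
        elif self.refcount[s] > 1:
            self.refcount[s] -= 1               # old sector stays with snapshots
            self.current[offset] = self._alloc()
        # refcount == 1, owned only by current: modified in place

    def snapshot_create(self, name):
        for s in self.current.values():
            self.refcount[s] += 1               # snapshot becomes a co-owner
        self.snapshots[name] = dict(self.current)

    def snapshot_revert(self, name):
        for s in self.current.values():
            self.refcount[s] -= 1               # current drops its references
        self.current = dict(self.snapshots[name])
        for s in self.current.values():
            self.refcount[s] += 1               # and shares the snapshot's

    def snapshot_delete(self, name):
        for s in self.snapshots.pop(name).values():
            self.refcount[s] -= 1

# The sequence from the discussion above:
img = Image()
img.write(0)
img.write(1)
img.snapshot_create('a')    # sectors 0 and 1 shared by 'a' and current
img.write(1)                # COW: offset 1 gets new sector 2
img.snapshot_create('b')
img.snapshot_revert('a')
img.snapshot_delete('a')
img.snapshot_create('a')    # repoint 'a'; entry point 'b' is untouched
```

After the final step, snapshot 'b' still references the sector written
between 'a' and 'b', which is the "no chaining, sectors stand in
isolation" property described above.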

But with external snapshots, you are absolutely correct that reverting
to a parent file _and then modifying that file_  will invalidate all
previous children.

In the past, I have proposed several flags to be added to the API to
take care of this.  One is the ability to create a new child from an
existing parent.  That is, if we have external snapshot 'a' with child
'b', and we want to revert to the state of 'a', then we would really
create a NEW child 'c' as another external snapshot but starting life
with all content coming from the backing file of 'a'.  The other is the
ability to control what happens to existing children when reverting to a
snapshot - the revert must be refused unless the caller passes a flag
acknowledging that all remaining children are discarded, because the
revert action will invalidate them.
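A toy Python model of those two proposals (the class and method names
are invented for the sketch and are not the libvirt API; real external
snapshots are qcow2 files linked by backing-file headers):

```python
class SnapError(Exception):
    pass

class Tree:
    """Toy model of an external snapshot tree: each snapshot is a
    read-only file, and overlays name the file they are backed by."""

    def __init__(self, base):
        self.backing = {}           # overlay -> its backing file
        self.children = {base: []}  # file -> overlays stacked on top of it
        self.active = base          # the writable image

    def snapshot(self, name, parent):
        """External snapshot: 'name' is a new overlay backed by 'parent'."""
        self.backing[name] = parent
        self.children.setdefault(name, [])
        self.children[parent].append(name)
        self.active = name

    def delete(self, name):
        for k in list(self.children[name]):
            self.delete(k)
        self.children[self.backing[name]].remove(name)
        del self.backing[name]
        del self.children[name]

    def revert_in_place(self, snap, discard_children=False):
        """Make 'snap' writable again.  Modifying it would invalidate
        every overlay backed by it, so refuse unless the caller
        explicitly discards the children (the proposed flag)."""
        kids = self.children[snap]
        if kids and not discard_children:
            raise SnapError("%s has children %s" % (snap, kids))
        for k in list(kids):
            self.delete(k)
        self.active = snap

    def revert_via_new_child(self, snap, new_name):
        """Safe revert: create a fresh overlay backed by 'snap', so
        'snap' stays read-only and existing children remain valid."""
        self.snapshot(new_name, snap)

# External snapshot 'a' with child 'b', as in the discussion:
t = Tree('base')
t.snapshot('a', 'base')
t.snapshot('b', 'a')

refused = False
try:
    t.revert_in_place('a')      # 'b' would be invalidated: refused
except SnapError:
    refused = True

t.revert_via_new_child('a', 'c')  # safe: 'b' keeps a valid backing chain
```

The first method is the "discard children" flag; the second is the
"new child from an existing parent" flag, which is the only way to
revert without losing the states reachable through 'b'.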

I guess I need to dig out my work on that front, although your patch
will certainly help me in doing it.

>  src/qemu/qemu_driver.c | 72 ++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 64 insertions(+), 8 deletions(-)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 9bf89bb..53f5340 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -11205,6 +11205,65 @@ qemuDomainSnapshotRevertInactive(struct qemud_driver *driver,
>      return ret > 0 ? -1 : ret;
>  }
>  
> +/* The domain is expected to be locked */
> +static int qemuDomainRevertToSnapshotDisk(virDomainSnapshotObjPtr snap,
> +                                          virDomainObjPtr vm,
> +                                          virDomainDefPtr config,
> +                                          unsigned int flags)
> +{
> +    int ret = -1;
> +    int i;
> +
> +    if (virDomainObjIsActive(vm)) {
> +        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> +                       _("domain must be stopped to revert to a "
> +                         "disk only snapshot"));
> +        goto cleanup;
> +    }

This should not be necessary, given my above arguments that the calling
function should have already stopped any running domain.

> +
> +    if (flags & (VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING |
> +                 VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED)) {
> +        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
> +                       _("domain can only be shutoff after reverting to a "
> +                         "disk only snapshot"));
> +        goto cleanup;
> +    }

No, these flags say that _after_ reverting to the disk state, we then
boot it immediately.  They should therefore be allowed.  In fact, for a
transient guest, this is the only feasible action, since transient
guests are not allowed to be offline.

-- 
Eric Blake   eblake@xxxxxxxxxx    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
