Re: Create and revert to snapshot fails silently after device update on running domain

Hi Eric,

I ran into this issue again in a slightly different scenario:

1) create a snapshot on the running domain

then for backup purposes I do the following (a libvirt-python sketch follows the list):

2) create an external disk snapshot without metadata, and copy off the original file to a backup device
3) do a blockpull operation so the external disk snapshot file now becomes the base file
4) delete the original file
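
For clarity, this is roughly what I do in libvirt-python (a sketch, not my actual code; the overlay filename and backup path are illustrative):

import libvirt
import os
import shutil
import time

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('4d5d722b-864c-657e-0f39-55d1bafc760e')

# step 2: external disk-only snapshot, no libvirt metadata
snap_xml = """
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/data/domains/overlay.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
"""
dom.snapshotCreateXML(snap_xml,
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA)

# the original file is now a read-only backing file; copy it off
base = '/data/domains/2f8baacd-563c-b747-b621-c0ddb4aa84bd'
shutil.copy(base, '/backup/')

# step 3: blockpull merges the backing file into the new overlay;
# poll until the block job finishes
dom.blockPull('vda', 0, 0)
while True:
    info = dom.blockJobInfo('vda', 0)
    if not info or info['cur'] == info['end']:
        break
    time.sleep(1)

# step 4: the base file is no longer referenced and can be deleted
os.remove(base)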

Now reverting to the snapshot from step 1 results in the same behaviour: no snapshot activation, and no error is reported. I can imagine this is a similar problem, as the disk source files change (but this is not a hotplug event, right?), so I have also tried the following (steps 5 and 6 are sketched in code after the list):

2) create an external disk snapshot without metadata, and copy off the original file to a backup device
3) do a blockpull operation so the external disk snapshot file now becomes the base file
4) delete the original file
5) create a new external disk snapshot with the original pre-backup filename
6) do a blockpull operation again
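
Steps 5 and 6 in the same terms (again just a sketch; dom is the handle from the previous snippet):

# step 5: new external snapshot whose overlay reuses the original filename
snap_xml = """
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/data/domains/2f8baacd-563c-b747-b621-c0ddb4aa84bd'/>
    </disk>
  </disks>
</domainsnapshot>
"""
dom.snapshotCreateXML(snap_xml,
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA)

# step 6: pull the backing file into the overlay again, as in step 3
dom.blockPull('vda', 0, 0)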

In this case everything should be the same as prior to the backup, but the snapshot still does not activate. Stopping the domain and then activating the snapshot also does not work (it just starts the domain instead).

Any idea?

Kind regards,
- Jasper

On Thu, 04 Oct 2012 10:44:27 -0600, Eric Blake wrote:
On 09/26/2012 04:19 AM, Jasper Spit wrote:
Hi list,

I'm having an issue with snapshot creation. Scenario:

qemu 1.1
libvirt 0.9.12

I create a domain, and start it. The domain has 1 IDE cdrom device
defined (see below).
Once the domain is started, I want to attach an ISO image to it. So I use
updateDeviceFlags in libvirt-python or update-device in virsh (both have
the same problem).
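
In libvirt-python terms the update is roughly this (a sketch; deb.xml is the file shown at the bottom of this mail):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('4d5d722b-864c-657e-0f39-55d1bafc760e')

# live-update the cdrom device so the ISO becomes its source
dom.updateDeviceFlags(open('deb.xml').read(),
                      libvirt.VIR_DOMAIN_AFFECT_LIVE)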

Both update approaches go through the same libvirt API, so it is the action
of a live device hotplug that is messing things up here.

This works fine; the ISO image becomes available to the domain. Now I create a snapshot of the still-running domain using snapshotCreateXML in
libvirt-python or snapshot-create in virsh. The command returns
immediately without error (a normal snapshot takes several seconds to
complete).

That sounds like a bug, where libvirt should have given an error about
being unable to create a snapshot.

If I revert to this snapshot, the command also returns
immediately without error, but the snapshot is not actually reverted to,

Probably fallout from the first bug - if the snapshot was never created in the first place, but libvirt went ahead and updated its metadata to
claim that the snapshot exists, then reverting will have nothing to
return to.

the domain remains running in the same state as if nothing had happened (I verify this by checking console output and whether a test file is present on the domain). If I do not use the update-device commands prior
to creating a snapshot, all is well. If I remove the source from the
cdrom device using update-device, the snapshots work properly again.

Any idea what causes this?

It sounds like qemu's 'savevm' command is not very good about handling saves after a hotplug event, and that libvirt isn't properly recognizing
this as a situation in which it must fail the snapshot.  I'll have to
try and reproduce the setup to see if I can come up with a libvirt
patch, but your steps look pretty detailed. Would you mind opening this
as a BZ, so it doesn't get lost?


Steps to reproduce using virsh:

virsh # start 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain 4d5d722b-864c-657e-0f39-55d1bafc760e started

virsh # snapshot-create 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain snapshot 1348653920 created

virsh # snapshot-revert 4d5d722b-864c-657e-0f39-55d1bafc760e 1348653920

All is good, the snapshot is reverted to properly. Now I update the
cdrom device:

virsh # update-device 4d5d722b-864c-657e-0f39-55d1bafc760e deb.xml
Device updated successfully

virsh # snapshot-create 4d5d722b-864c-657e-0f39-55d1bafc760e
Domain snapshot 1348654116 created

The command returns instantly.
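
The equivalent reproduction in libvirt-python, for completeness (a sketch; the empty <domainsnapshot/> makes libvirt generate a name, which is what snapshot-create does when given no XML file):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('4d5d722b-864c-657e-0f39-55d1bafc760e')

# baseline: internal snapshot and revert work on the running domain
snap = dom.snapshotCreateXML('<domainsnapshot/>', 0)
dom.revertToSnapshot(snap, 0)

# after the live cdrom update the same calls return instantly and do nothing
dom.updateDeviceFlags(open('deb.xml').read(), libvirt.VIR_DOMAIN_AFFECT_LIVE)
snap = dom.snapshotCreateXML('<domainsnapshot/>', 0)
dom.revertToSnapshot(snap, 0)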



virsh # dumpxml 4d5d722b-864c-657e-0f39-55d1bafc760e
<domain type='kvm' id='135'>

  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/data/images/debian-live-6.0.4-amd64-standard.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/data/domains/2f8baacd-563c-b747-b621-c0ddb4aa84bd'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
  </devices>
</domain>

deb.xml:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/data/images/debian-live-6.0.4-amd64-standard.iso'/>
  <target dev='hdc' bus='ide'/>
</disk>

deb-off.xml:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
</disk>
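
The workaround mentioned above is applied the same way as the original update:

virsh # update-device 4d5d722b-864c-657e-0f39-55d1bafc760e deb-off.xml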

Thanks much,

Hopefully we can get this resolved before the next libvirt release.
