Re: Internal Qemu snapshots with RBD and libvirt

On Fri, 19 Jul 2013, Josh Durgin wrote:
> On 07/18/2013 08:21 AM, Wido den Hollander wrote:
> > Hi,
> > 
> > I'm working on the RBD integration for CloudStack 4.2 and I've now
> > reached the point of snapshotting.
> > 
> > The "problem" is that CloudStack uses libvirt for snapshotting
> > Instances, but Qemu/libvirt also tries to store the memory contents of
> > the domain to assure the snapshot is consistent.
> > 
> > So the way libvirt tries to do it is not possible with RBD right now,
> > since there is no way to store the internal memory.

It seems like the way to view this is that snapshotting a VM means 
snapshotting all N block devices attached to it, plus the guest memory.  
It's not that something is missing from the RBD block device snapshot 
function; rather, it is not clear where to put the memory at all.

Maybe the libvirt or qemu VM metadata should specify a separate image 
target for the RAM?  How is this normally done when you're using, say, 
qcow2?  Is it assumed that the memory can somehow be stored with the 
first block device or something?
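For what it's worth, newer libvirt does let you name an explicit target for the RAM when taking an *external* snapshot, via --memspec. A sketch of what that invocation looks like; the domain name "guest1" and all paths below are made up for the example, and the command will of course only succeed against a host that actually has such a domain:

```shell
# Illustrative only: an external snapshot where libvirt writes the guest
# RAM to its own file, separate from the per-disk snapshot files.
if command -v virsh >/dev/null 2>&1; then
  virsh snapshot-create-as guest1 snap1 \
    --memspec file=/var/lib/libvirt/qemu/snapshot/guest1.mem,snapshot=external \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest1.snap1.qcow2 \
    || echo "snapshot attempt failed (no domain guest1 on this host)"
else
  echo "virsh not installed; command shown for illustration only"
fi
```

Whether that maps cleanly onto RBD-backed disks is exactly the open question here, since --diskspec wants a file target for the external disk snapshot.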

sage

> > 
> > I was thinking about using the Java librbd bindings to create the
> > snapshot, but that would not be consistent and thus not 100% safe, so
> > I'd rather avoid it.
> > 
> > How is this done in OpenStack? Or are you facing similar issues?
> 
> OpenStack doesn't store the memory contents of a domain. For volume
> snapshots, it requires that the volume is detached, so there can be
> no inconsistency, and the actual snapshot handling is done by the volume
> driver in cinder, so libvirt is not involved at all. It just uses the
> rbd command (or now the python bindings).
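To make the cinder-style flow concrete, here is a rough sketch of a volume snapshot through the RBD python bindings (python-rados / python-rbd). The pool, image, and snapshot names are illustrative, and actually running it requires a reachable cluster; this is roughly what the driver does, not its literal code:

```python
def snapshot_volume(pool, image_name, snap_name, conf="/etc/ceph/ceph.conf"):
    """Take an RBD snapshot of a (detached) volume, cinder-style.

    Note: this snapshots only the block device; no guest memory is
    involved, which is why the volume must not be in active use.
    """
    import rados  # from python-rados
    import rbd    # from python-rbd

    cluster = rados.Rados(conffile=conf)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            image = rbd.Image(ioctx, image_name)
            try:
                image.create_snap(snap_name)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

# Example call (needs a running cluster and an existing image):
# snapshot_volume("volumes", "volume-1234", "snap-1")
```

The imports are deferred into the function so the sketch can be read (and the bindings swapped for the Java ones) without a Ceph install on the machine.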
> 
> > P.S.: I'm testing with libvirt 1.0.6 from the Ubuntu Cloud Team archive
> > with packages for OpenStack Havana.
> > 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 