On 25/05/12 18:17, Eric Blake wrote:
On 05/25/2012 07:48 AM, NoxDaFox wrote:
Hello everybody,
I would like to be able to spawn several domains from a given snapshot.
So would I. It's not yet possible from libvirt, but we are getting
closer as we discuss ideas of how to do it.
Is there any link where I can follow the discussion about this topic?
Here's a possible scenario:
- I start from disk image A.qcow2.
- I make some changes to A and take different snapshots: 1 - 2 - 3
Disk snapshots, or full system checkpoint snapshots with VM state?
Disk snapshots. A full resume would be a cool feature as well, but I
think the effort to implement it may be too high atm compared to the
business cases that could benefit from it.
- While A is still running I would like to run domains B and C from snapshot 2
Do you want B and C starting from A's memory state (hard) or just from
A's disk state (easier)?
As I said, disk state. The use case is what I wrote before: "Lots of cpu
cycles saved once new image versions must be deployed (typical case a
windows update to propagate in the nodes network)."
Imagine how painful it is to maintain OS and software updates in a
cloud system.
I don't want to revert a domain, I want to create new ones. The
original should still be running, but it may be stopped if necessary.
The idea I had was to create a qcow2 image through qemu-img (let's
call it Z.qcow2); as I'm using copy-on-write for performance, I'd then
need to commit the changes contained in Z back to A, again using qemu-img.
Is there any better way? I would love to do it through libvirt,
maybe by specifying in the config file the backing store path of the
whole disk and then giving as the source file the one containing the deltas.
It sounds like you want the following qcow2 hierarchy:
A.base <- 1.snap <- 2.snap <- 3.snap
                          \- B.snap
As long as 2.snap is read-only for both A and B, then 3.snap (used by A)
and B.snap (used by B) can safely diverge in contents.
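Assuming external qcow2 snapshot files named as in the diagram above
(the names are illustrative), the two diverging branches could be
created with something like:

```shell
# A's branch: 3.snap records writes on top of the read-only 2.snap
qemu-img create -f qcow2 -b 2.snap 3.snap

# B's branch: a second, independent overlay on the same backing file
qemu-img create -f qcow2 -b 2.snap B.snap
```

Both overlays only ever read from 2.snap, which is why their contents
can diverge without corrupting each other.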
Indeed it is read-only. To be clear: I don't need anything magical atm,
just a "domain lifecycle branching" from disk images.
The glossary is quite confusing, but from what I understood, snapshot
#2's disk image should be the "backing file". Here comes the problem:
as I'm using copy-on-write, snapshot #2's image is not the real backing
file; without CoW this would be really easy.
What my system is capable of doing atm is: with or without CoW, take a
snapshot and automatically convert it into a qcow2 disk image through
qemu-img snapshot conversion; the resulting disk is used for analysis
purposes, but it would be nice to use it for this kind of scenario as well.
Everything is already possible if I don't use CoW, but you can easily
guess how performance suffers in that case.
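For reference, if these are internal qcow2 snapshots, the conversion
step described above can presumably be done with qemu-img's snapshot
option; a rough sketch, with illustrative file and snapshot names:

```shell
# List the internal snapshots recorded inside the image
qemu-img snapshot -l A.qcow2

# Flatten internal snapshot "2" into a standalone qcow2 image
# (the result no longer depends on A.qcow2 as a backing file)
qemu-img convert -f qcow2 -s 2 -O qcow2 A.qcow2 snap2-standalone.qcow2
```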
This could be a good approach for building a cloud-based system, as an
incredible amount of time would be saved through it.
Imagine moving only the qcow2 files containing deltas over the network,
and giving the base image to each node so that it can use it to start
its own domain.
Yes, thin provisioning via common base images is already part of VDSM
management of clouds.
It would also be possible to store the heavy base image on a single
node, saving storage space.
Lots of CPU cycles would be saved once new image versions must be
deployed (a typical case: a Windows update to propagate through the
node network).
_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users