I would try to snapshot more than only the live/running VMs' committed writes. I currently host all of my VMs on ZFS and snapshot regularly; I'm a fan of this method. In the same script, however, I also drop an empty indicator file ("on" or "off") in the mount point prior to snapshotting. This way, I can run a simple `find /dataset/path/ -name "on"` to find all the 'good' restore points to clone from, if need be (you could also just append it to the snapshot name).
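As a minimal sketch of the indicator-file idea (the mount point below is a stand-in for a real ZFS dataset path, and the marker names are just the ones I use):

```shell
#!/bin/sh
# Drop an "on" or "off" marker in the dataset's mount point before
# snapshotting, so snapshots carrying an "on" marker can later be
# identified as 'good' restore points.
mountpoint=/tmp/demo-dataset       # hypothetical dataset mount point
mkdir -p "$mountpoint"

# Clear any stale marker, then record the VM's state ("on" = running cleanly).
rm -f "$mountpoint/on" "$mountpoint/off"
touch "$mountpoint/on"

# The actual snapshot would happen here, e.g.:
# zfs snapshot tank/vms@$(date +%F-%H%M)

# Later, list all mount points (or snapshot clones) carrying a 'good' marker:
find "$mountpoint" -name on
```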
I've had some difficulty getting live/running snapshots to work in a manner that doesn't cause my servers to freeze for a few moments, and I'd definitely like to learn the proper way to do this. I'd like to take regular (e.g. hourly) live quiesced backups of Windows- and Linux-based servers that store (external) state data in a subdirectory at the disks' mount point in my storage array, rather than in the default location (which is not all that useful, as it would fill up my host's OS drive). But again, I've found that I experience network traffic loss to the VMs with the current flags.
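For what it's worth, the direction I've been meaning to test is the `--quiesce` flag, which requires the qemu-guest-agent running inside the guest and asks it to freeze/thaw guest filesystems around a disk-only snapshot. This is only a sketch with made-up domain and path names, and I haven't verified whether it helps with the freeze or traffic-loss issue:

```shell
#!/bin/sh
# Sketch only: a disk-only, quiesced external snapshot.
# Assumes qemu-guest-agent is installed and running in the guest;
# --quiesce requires --disk-only (no memory state is saved).
VM=example-vm                 # hypothetical domain name
path=/tank/vm-backups         # hypothetical snapshot directory

cmd="virsh snapshot-create-as --domain $VM --quiesce --atomic --disk-only \
--diskspec vda,snapshot=external,file=$path/$VM-vda.qcow2"

# Echo rather than execute, since this is untested on my setup:
echo "$cmd"
```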
The command I currently use is:

    eval "virsh snapshot-create-as --domain $VM --atomic \
        --memspec file=$path/$(MEM_FILE),snapshot=external$devs"
where `$devs` is a string that’s built as
    # Append all the disks to a string iteratively.
    devs=""
    for dev in $(virsh domblklist "$VM" | tail -n +3 | head -n -1 \
        | awk '{ print $1 }')
    do
        devs="$devs --diskspec $dev,snapshot=external,file=$path/$(DISK_FILE "$dev")"
    done
I could probably clean this up or make it simpler somehow (I wrote it quite a while ago), but I'd appreciate any and all suggestions.
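One cleanup I've been considering (untested against my actual script, with placeholder device names standing in for the `virsh domblklist` output) is building the arguments in a bash array instead of a flat string, which avoids the `eval` and its quoting pitfalls:

```shell
#!/bin/bash
# Build the --diskspec arguments in an array rather than a string, so no
# eval is needed and paths containing spaces stay intact.
path=/tank/vm-backups            # hypothetical snapshot directory
args=()
for dev in vda vdb; do           # in the real script: parsed from virsh domblklist
    args+=(--diskspec "$dev,snapshot=external,file=$path/$dev.qcow2")
done

# The real invocation would then be:
# virsh snapshot-create-as --domain "$VM" --atomic "${args[@]}"
printf '%s\n' "${args[@]}"
```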
Brandon
_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users