Speed is not critical even for an ordinary snapshot. I haven't looked into the
qemu code, but on x86 a normal fork just splits the virtual address space and
keeps both "compacted" copies copy-on-write. So the "snapshot" is instantaneous
(no real RAM is copied, only the page descriptors are fixed), and the forked
copy can then be written out at any speed. (Rough python sketches of this idea,
and of the FREEZER and pause+snapshot ideas discussed below, are appended after
the quotes.)

But, for example, an untuned Linux guest is very sensitive to time drift, so
live migration only behaves well with nohz=off, clock=acpi_pm, etc. I also see
VMs rebooting frequently on a ceph failure (one node out of three, with 2/1 or
3/2 replication size) - I keep tuning it, but have not found a final solution
yet. Windows guests usually survive random ceph freezes. A default Linux guest
makes heavy use of very precise hrtimers and schedulers, so the best workaround
is "guest cooperation" ;) - let the guest freeze itself.

Andrey Korolyov wrote:
> If I understood you right, your approach is to suspend the VM via the ACPI
> mechanism, then dump core, then restore it - this should take longer than a
> simple coredump because of the time the guest OS needs to sleep/resume, which
> seems unnecessary. A copy-on-write mechanism should reduce downtime to very
> acceptable values, but unfortunately I have not heard of such a mechanism
> outside of academic projects.
>
> On Sat, May 18, 2013 at 5:48 PM, Dzianis Kahanovich
> <mahatma@xxxxxxxxxxxxxx> wrote:
>> IMHO, interaction between QEMU and the kernel's FREEZER (part of hibernation
>> and cgroups) can solve many of these problems. It can be done via QEMU
>> host-to-guest sockets and scripts, or embedded into the virtual hardware
>> (simulating real "suspend" behaviour).
>>
>> Andrey Korolyov wrote:
>>> Hello,
>>>
>>> I thought about the same mechanism a while ago. After a couple of tests I
>>> concluded that the coredump should be written not to Ceph directly, but to
>>> a tmpfs directory, to reduce the VM's idle time (this applies specifically
>>> to QEMU; if another hypervisor can work with a Ceph backend and has a
>>> COW-like memory snapshotting mechanism, the time needed to 'flush' the
>>> coredump does not matter). Anyway, with QEMU a relatively simple shell
>>> script should do the thing.
>>>
>>> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>>> <jens@xxxxxxxxxxxxxxxxxxxx> wrote:
>>>> Hi guys,
>>>>
>>>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>>>> registers, etc.) on Ceph itself?
>>>>
>>>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>>>> i.e. without the need to quiesce file systems, databases, etc.
>>>>
>>>> I'm thinking of an automated process which pauses the VM, flushes the Ceph
>>>> writeback cache (if any), snapshots the rbd image and saves the VM state on
>>>> Ceph as well. I imagine this should only take a very short amount of time,
>>>> and then the VM can be unpaused and continue with minimal interruption.
>>>>
>>>> The new Ceph export command could then be used to store that backup on a
>>>> secondary Ceph cluster or on simple storage.
>>>>
>>>> --
>>>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>>>> jens@xxxxxxxxxxxxxxxxxxxx,
>>>> http://www.mermaidconsulting.com/
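To make the fork/COW point above concrete, here is a minimal python sketch
(just to illustrate the idea - this is not what qemu actually does internally):
the fork itself is "instant" because the kernel only marks the pages
copy-on-write, and the child can then write its frozen copy out at whatever
speed the storage allows while the parent keeps running.

    # Rough sketch: fork + copy-on-write memory snapshot, not qemu internals.
    import os

    state = bytearray(64 * 1024 * 1024)      # stand-in for "guest RAM"

    def snapshot(path):
        pid = os.fork()                      # "instant": pages only become COW
        if pid == 0:                         # child sees a frozen copy of state
            with open(path, "wb") as f:
                f.write(state)               # can be flushed at any speed
            os._exit(0)
        return pid                           # parent continues immediately

    if __name__ == "__main__":
        child = snapshot("/tmp/ram.snap")
        state[0:8] = b"mutated!"             # parent keeps writing; snapshot unaffected
        os.waitpid(child, 0)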
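The FREEZER interaction I mentioned earlier can be driven from the host side
through the cgroup freezer. A sketch, assuming cgroup v1 with the freezer
controller mounted at /sys/fs/cgroup/freezer and the qemu process already
placed in a cgroup named "qemu-vm1" (both names are made up for the example):

    # Sketch: freeze/thaw a qemu process group around a snapshot (cgroup v1).
    FREEZER = "/sys/fs/cgroup/freezer/qemu-vm1/freezer.state"

    def set_state(state):                    # "FROZEN" or "THAWED"
        with open(FREEZER, "w") as f:
            f.write(state)

    def run_frozen(action):
        set_state("FROZEN")
        try:
            return action()                  # e.g. dump RAM / snapshot rbd here
        finally:
            set_state("THAWED")

Guest-side cooperation (letting the guest quiesce itself first, via a
host-to-guest socket) would go right before the FROZEN write.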
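And a sketch of the automated flow Jens describes, combined with Andrey's
point about dumping to tmpfs first: pause, snapshot the rbd image, dump the VM
memory to tmpfs, resume, and only then move the dump onto Ceph. Domain, pool
and image names are placeholders, and the virsh/rbd/rados invocations may need
adjusting for a real setup (flushing a librbd writeback cache, if enabled, is
not shown here):

    # Sketch: pause -> rbd snapshot -> memory dump to tmpfs -> resume -> archive.
    import subprocess, time

    def run(*cmd):
        subprocess.check_call(cmd)

    def backup(domain="vm1", image="rbd/vm1-disk"):
        snap = time.strftime("backup-%Y%m%d-%H%M%S")
        dump = "/dev/shm/%s-%s.core" % (domain, snap)      # tmpfs: dump is fast
        run("virsh", "suspend", domain)                    # pause the guest
        try:
            run("rbd", "snap", "create", "%s@%s" % (image, snap))   # disk snapshot
            run("virsh", "dump", "--memory-only", domain, dump)     # RAM/VM state
        finally:
            run("virsh", "resume", domain)                 # keep downtime short
        # archive the memory dump on Ceph afterwards, at leisure
        run("rados", "-p", "backups", "put",
            "%s-%s.core" % (domain, snap), dump)

    if __name__ == "__main__":
        backup()

From there, "rbd export" of the snapshot plus the archived dump could be copied
to a secondary cluster or plain storage, as Jens suggests.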
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/