Re: Ceph and Qemu

If I understood you right, your approach is to suspend the VM via the
ACPI mechanism, then dump core, then restore it. That should take
longer than a simple coredump because of the time the guest OS needs
to sleep and resume, which seems unnecessary. A copy-on-write
mechanism should reduce downtime to very acceptable values, but
unfortunately I have not heard of such a mechanism outside of academic
projects.
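For comparison, with libvirt and the qemu guest agent the two routes
look roughly like this (an untested sketch; the domain name and paths
are only placeholders):

    # (a) ACPI route: ask the guest OS to sleep, dump, then wake it up.
    #     Downtime includes the guest's own sleep/resume timings.
    virsh qemu-agent-command vm0 '{"execute": "guest-suspend-ram"}'
    virsh dump vm0 /dev/shm/vm0.core --memory-only
    virsh qemu-monitor-command vm0 --hmp system_wakeup

    # (b) direct route: just stop the vCPUs; the guest OS is not involved.
    virsh suspend vm0
    virsh dump vm0 /dev/shm/vm0.core --memory-only
    virsh resume vm0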

On Sat, May 18, 2013 at 5:48 PM, Dzianis Kahanovich
<mahatma@xxxxxxxxxxxxxx> wrote:
> IMHO, interaction between QEMU and the kernel's FREEZER (part of the
> hibernation and cgroups machinery) can solve many of these problems. It
> could be driven via QEMU host-to-guest sockets and scripts, or embedded
> into the virtual hardware (simulating real "suspend" behavior).
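For what it's worth, freezing the whole QEMU process from the host
needs nothing beyond the cgroup freezer itself (a minimal sketch,
assuming the v1 freezer controller is mounted and $QEMU_PID holds the
emulator's PID):

    CG=/sys/fs/cgroup/freezer/vm0
    mkdir -p "$CG"
    echo "$QEMU_PID" > "$CG/tasks"
    echo FROZEN > "$CG/freezer.state"   # stops every emulator thread
    # ... take the memory/disk snapshot here ...
    echo THAWED > "$CG/freezer.state"   # resumes the process

The guest is unaware of any of this, so its clock simply jumps forward
on thaw.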
>
> Andrey Korolyov writes:
>> Hello,
>>
>> I thought about the same mechanism a while ago. After a couple of
>> tests I concluded that the coredump should be done not to Ceph
>> directly, but to a tmpfs directory, to reduce the VM's idle time
>> (this applies specifically to QEMU; for another hypervisor that can
>> work with a Ceph backend and has a COW-like memory snapshotting
>> mechanism, the time taken to flush the coredump does not matter).
>> Anyway, with QEMU a relatively simple shell script should do the
>> thing.
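The tmpfs variant could look like this (an untested sketch; the HMP
monitor socket path and object names are placeholders):

    MON=/var/run/qemu/vm0.mon
    STATE=/dev/shm/vm0.state            # tmpfs, so the dump itself is fast

    echo 'stop' | socat - UNIX-CONNECT:$MON
    echo 'migrate "exec:cat > '$STATE'"' | socat - UNIX-CONNECT:$MON
    # poll 'info migrate' until it reports "completed", then:
    echo 'cont' | socat - UNIX-CONNECT:$MON

    # flush to Ceph only after the guest is already running again
    rados -p rbd put vm0.state $STATE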
>>
>> On Sat, May 18, 2013 at 4:51 PM, Jens Kristian Søgaard
>> <jens@xxxxxxxxxxxxxxxxxxxx> wrote:
>>> Hi guys,
>>>
>>> I was wondering if anyone has done some work on saving qemu VM state (RAM,
>>> registers, etc.) on Ceph itself?
>>>
>>> The purpose for me would be to enable easy backups of non-cooperating VMs -
>>> i.e. without the need to quiesce file systems, databases, etc.
>>>
>>> I'm thinking of an automated process that pauses the VM, flushes the
>>> Ceph writeback cache (if any), snapshots the rbd image and saves the
>>> VM state on Ceph as well. I imagine this should only take a very
>>> short amount of time, and then the VM can be unpaused and continue
>>> with minimal interruption.
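That sequence maps to only a handful of commands (a rough sketch,
assuming libvirt and an image rbd/vm0; stopping the vCPUs should make
QEMU flush its block layer, rbd writeback cache included):

    virsh suspend vm0                   # pause; block devices get flushed
    virsh qemu-monitor-command vm0 --hmp \
        'migrate "exec:cat > /dev/shm/vm0.state"'
    rbd snap create rbd/vm0@backup-$(date +%F)
    virsh resume vm0
    rados -p rbd put vm0.state /dev/shm/vm0.state   # state on Ceph as well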
>>>
>>> The new Ceph export command could then be used to store that backup on a
>>> secondary Ceph cluster or on simple storage.
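If that refers to export-diff from Cuttlefish, the backup could be
shipped along these lines (again a sketch; the second cluster's
ceph.conf path is a placeholder):

    # full copy of the snapshot to a secondary cluster
    rbd export rbd/vm0@backup-2013-05-18 /backup/vm0.img
    rbd -c /etc/ceph/backup.conf import /backup/vm0.img vm0-backup

    # or incrementally, against the previous day's snapshot
    rbd export-diff --from-snap backup-2013-05-17 \
        rbd/vm0@backup-2013-05-18 /backup/vm0.diff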
>>>
>>> --
>>> Jens Kristian Søgaard, Mermaid Consulting ApS,
>>> jens@xxxxxxxxxxxxxxxxxxxx,
>>> http://www.mermaidconsulting.com/
>
>
> --
> WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




