> Hi,
> I thought about the same mechanism a while ago. After a couple of tests I concluded that the core dump should be written not to Ceph directly, but to a tmpfs directory, to reduce the VM's idle time (specifically for QEMU).
I see that more as an implementation detail - i.e. the state is initially saved to RAM (or a tmpfs/ramdisk) and then committed to Ceph storage afterwards.
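To make that two-step idea concrete, here is a minimal sketch, assuming the hypervisor has already dumped the state to a file on a RAM-backed filesystem and that the state should end up as a plain RADOS object in the same pool as the disk image. The pool, object and file names are invented for illustration, and it uses the python-rados bindings rather than anything QEMU or libvirt would do natively:

#!/usr/bin/env python
# Sketch: commit a VM state file from tmpfs into Ceph as a RADOS object.
# Assumes python-rados is installed and /etc/ceph/ceph.conf points at the cluster.
# The pool/object/file names are placeholders, not anything QEMU defines itself.
import rados

STATE_FILE = "/dev/shm/vm1.state"   # state dumped to RAM-backed storage first
POOL = "rbd"                        # same pool that holds the disk image
OBJECT = "vm1.state"                # companion object stored next to the image

def commit_state_to_ceph():
    with open(STATE_FILE, "rb") as f:
        data = f.read()
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            # write_full replaces the whole object in one go, so a
            # half-written state never becomes visible under this name.
            ioctx.write_full(OBJECT, data)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

if __name__ == "__main__":
    commit_state_to_ceph()

A saved state can of course be many gigabytes, so a real version would stream it in chunks (or write it into a dedicated RBD image instead of a single object), but the shape of the flow is the same: dump fast to RAM, then push to Ceph afterwards.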
The part I was interested in was whether someone had looked at a way to store the disk image together with the state as one unit in Ceph. That would make it easier to manage and back up.
As far as I understand it, it is not possible out of the box to use qcow2 on top of Ceph with qemu-kvm and librbd. I don't see why this shouldn't be possible in theory - and it would make it easy to store the state alongside the disk image.
Am I wrong in assuming that it is not possible to layer qcow2 on top of rbd with qemu-kvm and librbd?
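For what it's worth, the way I would probe that (just a sketch - the "rbd" pool and "qcow2-test" image names are made up) is to ask qemu-img to create a qcow2 container on an rbd: URL and then re-open it, and see whether this particular build of the block layer accepts the combination:

#!/usr/bin/env python
# Quick probe: does this qemu build accept qcow2 layered on the rbd protocol driver?
import subprocess

def run(cmd):
    print("$ " + " ".join(cmd))
    return subprocess.call(cmd)

# Ask qemu-img to create a qcow2 container whose storage is an rbd image.
created = run(["qemu-img", "create", "-f", "qcow2", "rbd:rbd/qcow2-test", "1G"])

if created == 0:
    # If creation succeeded, check that qemu-img re-opens it and reports qcow2.
    run(["qemu-img", "info", "--output=json", "rbd:rbd/qcow2-test"])
else:
    print("qemu-img refused qcow2 on top of rbd on this build")

If that works at all, the remaining question would be whether qemu-kvm itself is happy to run a guest from such an image.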
-- 
Jens Kristian Søgaard,
Mermaid Consulting ApS,
jens@xxxxxxxxxxxxxxxxxxxx,
http://www.mermaidconsulting.com/