hi Steffen,
do you mean live VM migration with a ceph disk? That should really be discussed on the qemu-kvm list, but I have tested it and it works fine.
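Since the rbd image is shared storage visible from both hosts, it is just an ordinary live migration and no block migration is needed. A rough sketch from the qemu monitor (hostname, port and image name below are only examples):
---
# on the destination host, start the same VM with an -incoming port
qemu-system-x86_64 ... -drive file=rbd:storage1/CentOS7-3,format=raw -incoming tcp:0:4444
# on the source VM's qemu monitor
(qemu) migrate -d tcp:desthost:4444
(qemu) info migrate
---
With libvirt the equivalent is simply: virsh migrate --live <vm> qemu+ssh://desthost/system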
ceph has rbd snapshots and we can use them, but some qemu-kvm features we developed are based on qemu snapshots, so they require qcow2 (or another format with qemu snapshot support)...
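For reference, the rbd-level snapshots Steffen mentions work roughly like this (pool/image/snapshot names are only examples):
---
rbd snap create storage1/CentOS7-3@before-upgrade
rbd snap ls storage1/CentOS7-3
rbd snap rollback storage1/CentOS7-3@before-upgrade
---
They only cover the disk image, though; unlike a qemu internal snapshot (savevm) they do not capture the VM's RAM/device state, which is why we still depend on qcow2 for that part.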
On Fri, Jan 29, 2016 at 2:01 AM, Steffen Weißgerber <WeissgerberS@xxxxxxx> wrote:
>>> Bill WONG <wongahshuen@xxxxxxxxx> wrote on Thursday, 28 January 2016 at 09:30:
> Hi Marius,
>
Hello,
> with ceph rbd, it looks like it can support qcow2 as well, as per its
> documentation:
> http://docs.ceph.com/docs/master/rbd/qemu-rbd/
> --
> Important: The raw data format is really the only sensible format option
> to use with RBD. Technically, you could use other QEMU-supported formats
> (such as qcow2 or vmdk), but doing so would add additional overhead, and
> would also render the volume unsafe for virtual machine live migration
> when caching (see below) is enabled.
Normally my question would be off-topic on this list, but I asked it
already on the qemu list and got no answer:
Is there documentation available on how to do live migration of rbd disks
with the qemu-monitor?
> ---
>
> without qcow2, qemu-kvm cannot make snapshots or use some other
> features.... does anyone have ideas or experience with this?
> thank you!
>
Why use qemu snapshots when rbd snapshots are available?
Regards
Steffen
>> Marius Vaitiekūnas
>
> On Thu, Jan 28, 2016 at 3:54 PM, Marius Vaitiekunas <mariusvaitiekunas@xxxxxxxxx> wrote:
>
>> Hi,
>>
>> With ceph rbd you should use the raw image format. As far as I know,
>> qcow2 is not supported.
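For completeness, creating the image as raw instead should look roughly like this (pool/image name and size are only examples, untested here):
---
qemu-img create -f raw rbd:storage1/CentOS7-3 10G
# or equivalently with the rbd tool (size in MB)
rbd create storage1/CentOS7-3 --size 10240
---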
>>
>> On Thu, Jan 28, 2016 at 6:21 AM, Bill WONG <wongahshuen@xxxxxxxxx> wrote:
>>
>>> Hi Simon,
>>>
>>> i have installed the ceph package on the compute node, but it looks like
>>> a qcow2-format image cannot be created.. it shows the error: Could not
>>> write qcow2 header: Invalid argument
>>>
>>> ---
>>> qemu-img create -f qcow2 rbd:storage1/CentOS7-3 10G
>>> Formatting 'rbd:storage1/CentOS7-3', fmt=qcow2 size=10737418240
>>> encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
>>> qemu-img: rbd:storage1/CentOS7-3: Could not write qcow2 header: Invalid argument
>>> ---
>>>
>>> any ideas?
>>> thank you!
>>>
>>> On Thu, Jan 28, 2016 at 1:01 AM, Simon Ironside <sironside@xxxxxxxxxxxxx> wrote:
>>>
>>>> On 27/01/16 16:51, Bill WONG wrote:
>>>>
>>>>> i have the ceph cluster and KVM on different machines.... the qemu-kvm
>>>>> host (CentOS7) is a dedicated compute node with only qemu-kvm + libvirtd
>>>>> installed, so there should be no /etc/ceph/ceph.conf
>>>>>
>>>>
>>>> Likewise, my compute nodes are separate machines from the OSDs/monitors,
>>>> but the compute nodes still have the ceph package installed and
>>>> /etc/ceph/ceph.conf present. They just aren't running any ceph daemons.
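For reference, a client-only /etc/ceph/ceph.conf on such a node can be as small as this (monitor addresses are only placeholders), with the client keyring file placed alongside it:
---
[global]
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
---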
>>>>
>>>> I give the compute nodes their own ceph key with write access to the
>>>> pool for VM storage and read access to the monitors. I can then use ceph
>>>> status, rbd create, qemu-img etc. directly on the compute nodes.
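A key like that can be created roughly as follows (client name and pool name are only placeholders):
---
ceph auth get-or-create client.compute mon 'allow r' osd 'allow rwx pool=storage1' -o /etc/ceph/ceph.client.compute.keyring
---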
>>>>
>>>> Cheers,
>>>> Simon.
>>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
--
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Managing Director (Geschaeftsfuehrerin): Gudrun Kappich
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com