Re: libvirtd + rbd - stale kvm after migrate

Hi Florian,

Live migration with rbd images usually works fine. A few recommendations:

- You should not map the image on the host (e.g. with rbd map) while a
VM is using it through the qemu driver; see the sketch right after this
list.
- For testing, I would remove the ISO image from your VM (not sure if
that matters).

Also, I'm not using cephx authentication myself, so I don't know whether
the credentials are available on the second host; you don't specify them
in the libvirt XML.
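Since I don't use cephx, the following is only a sketch of how I
understand the libvirt side: the UUID is a placeholder and 'admin' is an
assumed client name, so adjust both. First define the same secret on
every host the VM can run on:

    <!-- secret.xml -->
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.admin secret</name>
      </usage>
    </secret>

    virsh secret-define secret.xml
    virsh secret-set-value <uuid printed by secret-define> <base64 ceph key>

Then reference it from the disk definition, e.g.:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='admin'>
        <!-- placeholder UUID; must match the secret on each host -->
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <source protocol='rbd' name='rbd/ns1'>
        <host name='xxx.xxx.xxx.4' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

Listing the monitor explicitly in <source> may be redundant if
/etc/ceph/ceph.conf is readable by qemu, but it makes the XML
self-contained.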

Best regards
Christian

2011/12/8 Smart Weblications GmbH - Florian Wiessner
<f.wiessner@xxxxxxxxxxxxxxxxxxxxx>:
> Hi List,
>
>
> I set up a 4-node cluster using corosync, pacemaker and ceph, created a 160 GB
> rbd image with qemu-img, and started one virtual machine using qemu-kvm.
>
> The virtual machine runs fine until I issue crm node standby on the running
> host. I can see the VM migrate to another host and start up without issues,
> but the VM seems unable to access its disk, so all processes inside the VM
> hang waiting on disk I/O.
>
> If I migrate the VM back to the host where I started it, it runs normally
> without any problems. It seems that the other host is unable to use the rbd
> image - is there anything I am missing here?
>
> I also tried using an image file on a mounted ceph filesystem, but then the
> node to which the VM wants to migrate gets a kernel oops when trying to access
> the image file and locks up :(
>
> Any help would be highly appreciated!
>
> ceph --version
> ceph version 0.39-54-g745be30 (commit:745be30f517216474d83b9ada2f355217a984258)
> virsh --version
> 0.9.8
> qemu-system-x86_64 --version
> QEMU emulator version 1.0.50, Copyright (c) 2003-2008 Fabrice Bellard
> rbd --version
> ceph version 0.39-54-g745be30 (commit:745be30f517216474d83b9ada2f355217a984258)
>
> Executing rbd showmapped on node01
> id      pool    image   snap    device
> 0       rbd     ns1     -       /dev/rbd0
> Executing rbd showmapped on node02
> id      pool    image   snap    device
> 0       rbd     ns1     -       /dev/rbd0
> Executing rbd showmapped on node03
> id      pool    image   snap    device
> 0       rbd     ns1     -       /dev/rbd0
> Executing rbd showmapped on node04
> id      pool    image   snap    device
> 0       rbd     ns1     -       /dev/rbd0
>
> cat /etc/ceph/ceph.conf
> [global]
>         pid file = /var/run/ceph/$name.pid
>         debug ms = 1
>         auth supported = cephx
>         osd journal = /data/ceph.journal
>         osd_journal_size = 512
> #       filestore journal writeahead = true
> #       filestore journal parallel = true
>         mds max = 4
>
> [mon]
>         mon data = /data/ceph/mon
> [mon.0]
>         host = node01
>         mon addr = xxx.xxx.xxx.4:6789
> [mon.1]
>         host = node02
>         mon addr = xxx.xxx.xxx.5:6789
> [mon.2]
>         host = node03
>         mon addr = xxx.xxx.xxx.6:6789
> [mon.3]
>         host = node04
>         mon addr = xxx.xxx.xxx.7:6789
>
> [mds]
>         keyring = /etc/ceph/keyring.$name
> #       mds dir max commit size 32
>
> [mds.0]
>         host = node01
> [mds.1]
>         host = node02
> [mds.2]
>         host = node03
> [mds.3]
>         host = node04
>
> [osd]
>         sudo = true
>         osd data = /data/ceph/osd
>         keyring = /etc/ceph/keyring.$name
> [osd.0]
>         host = node01
> [osd.1]
>         host = node02
> [osd.2]
>         host = node03
> [osd.3]
>         host = node04
>
> ns1.xml:
> <domain type='kvm'>
>  <name>ns1</name>
>  <uuid>350e51e8-2fe5-274f-76c4-58b237bc0fba</uuid>
>  <memory>1048576</memory>
>  <currentMemory>524288</currentMemory>
>  <vcpu>2</vcpu>
>  <os>
>    <type arch='x86_64' machine='pc-0.12'>hvm</type>
>    <boot dev='hd'/>
>  </os>
>  <features>
>    <acpi/>
>    <apic/>
>    <pae/>
>  </features>
>  <clock offset='utc'/>
>  <on_poweroff>destroy</on_poweroff>
>  <on_reboot>restart</on_reboot>
>  <on_crash>restart</on_crash>
>  <devices>
>    <emulator>/usr/local/bin/qemu-system-x86_64</emulator>
>    <disk type='file' device='cdrom'>
>      <driver name='qemu' type='raw'/>
>      <source file='/ceph/debian-6.0.3-amd64-netinst.iso'/>
>      <target dev='hdc' bus='ide'/>
>      <readonly/>
>      <address type='drive' controller='0' bus='1' unit='0'/>
>    </disk>
>    <disk type='network' device='disk'>
>      <driver name='qemu' type='raw'/>
>      <source protocol='rbd' name='rbd/ns1'/>
>      <target dev='vda' bus='virtio'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>    </disk>
>    <controller type='ide' index='0'>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
>    </controller>
>    <interface type='bridge'>
>      <mac address='52:54:00:d2:ec:15'/>
>      <source bridge='br0'/>
>      <model type='virtio'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>    </interface>
>    <input type='tablet' bus='usb'/>
>    <input type='mouse' bus='ps2'/>
>    <graphics type='vnc' port='5900' autoport='no'/>
>    <video>
>      <model type='cirrus' vram='9216' heads='1'/>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
>    </video>
>    <memballoon model='virtio'>
>      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>    </memballoon>
>  </devices>
> </domain>
>
> --
>
> Kind regards,
>
> Florian Wiessner
>
> Smart Weblications GmbH
> Martinsberger Str. 1
> D-95119 Naila
>
> fon.: +49 9282 9638 200
> fax.: +49 9282 9638 205
> 24/7: +49 900 144 000 00 - 0.99 EUR/min*
> http://www.smart-weblications.de
>
> --
> Registered office: Naila
> Managing director: Florian Wiessner
> Commercial register no.: HRB 3840, Amtsgericht Hof
> *from the German landline network; prices from mobile networks may differ
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

