Hi all,
This issue is also affecting us (CentOS 6.5 based Icehouse) and, as far as I can tell, it comes from the fact that the path /var/lib/nova/instances (or whatever path you have configured in nova.conf) is not shared. Nova does not see this path as shared storage and therefore refuses to perform a live migration, even though all the required information is stored in Ceph and in the local QEMU state.
Some people have "cheated" nova into treating this as a shared path, but I'm not confident about how that would affect stability.
Can someone confirm this deduction? What are the possible workarounds for this situation in a fully Ceph-based environment (without a shared path)?
Thanks in advance,
Samuel.
On 26 September 2014 09:20, Thomas Bernard <ml-ceph@xxxxxxxxxx> wrote:
Hi,
To check whether the compute services are up, verify the output of 'nova service-list'.
You should also check the nova-compute logs on both the source and destination hosts, and make sure your NTP sync is fine.
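For reference, something along these lines (log paths may differ on your distribution):

  # confirm nova-compute reports State "up" for both hypervisors
  nova service-list

  # look for migration errors on source and destination
  grep -i error /var/log/nova/nova-compute.log

  # confirm the clocks are in sync on each host
  ntpq -p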
By default OpenStack doesn't allow live migration; you need to set live_migration_flag in nova.conf and enable listen support in libvirt.
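As a rough sketch, on our nodes the relevant settings look like this (treat the exact flag list and file locations as examples only, and note that disabling auth_tcp is not appropriate on untrusted networks):

  # /etc/nova/nova.conf
  live_migration_flag=VIR_MIGRATE_UNDECLARE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

  # /etc/libvirt/libvirtd.conf
  listen_tls = 0
  listen_tcp = 1
  auth_tcp = "none"

  # /etc/default/libvirt-bin on Ubuntu (use /etc/sysconfig/libvirtd on CentOS)
  # add -l so libvirtd listens for incoming migrations
  libvirtd_opts="-d -l"

Then restart libvirtd and nova-compute on both hosts.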
I use the same setup as you, but on Ubuntu 12.04 (the packages are the same), and live migration works perfectly.
Good luck.
---------------
Le 25/09/2014 17:34, Daniel Schneller a écrit :
Hi!
We have an Icehouse system running with librbd-based Cinder and Glance configurations, storing images and volumes in Ceph.
Configuration is (apart from network setup details, of course) by the book / OpenStack setup guide.
Works very nicely, including regular migration, but live migration of virtual machines fails. I created a simple machine booting from a volume based off the Ubuntu 14.04.1 cloud image for testing.
Using Horizon, I can move this VM from host to host, but when I try to Live Migrate it from one baremetal host to another, I get an error message “Failed to live migrate instance to host ’node02’".
The only related log entry I recognize is in the controller’s nova-api.log:
2014-09-25 17:15:47.679 3616 INFO nova.api.openstack.wsgi [req-f3dc3c2e-d366-40c5-a1f1-31db71afd87a f833f8e2d1104e66b9abe9923751dcf2 a908a95a87cc42cd87ff97da4733c414] HTTP exception thrown: Compute service of node02.baremetal.clusterb.centerdevice.local is unavailable at this time.
2014-09-25 17:15:47.680 3616 INFO nova.osapi_compute.wsgi.server [req-f3dc3c2e-d366-40c5-a1f1-31db71afd87a f833f8e2d1104e66b9abe9923751dcf2 a908a95a87cc42cd87ff97da4733c414] 10.102.6.8 "POST /v2/a908a95a87cc42cd87ff97da4733c414/servers/0f762f35-64ee-461f-baa4-30f5de4d5ddf/action HTTP/1.1" status: 400 len: 333 time: 0.1479030
I cannot see anything of value on the destination host itself.
New machines get scheduled there, so the compute service cannot really be down.
In this thread (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/019944.html) Travis describes a similar situation; however, that was on Folsom, so I wonder if it is still applicable.
Would be great to get some outside opinion :)
Thanks!
Daniel
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH
Merscheider Straße 1
42699 Solingen
Deutschland
tel: +49 1754155711
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com