Icehouse & Ceph -- live migration fails?


 



Hi,

To check whether the compute services are up, verify the output of 'nova 
service-list'.
You can also check the nova-compute log on both the source and destination 
hosts, and make sure NTP is in sync between them, for example:
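
(Quick checks; the log path is the Ubuntu package default, adjust to your setup.)

    # on the controller
    nova service-list        # nova-compute must be 'enabled' and state 'up' on both hosts

    # on the source and destination compute nodes
    tail /var/log/nova/nova-compute.log
    ntpq -p                  # peers and offsets should agree between the hosts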

By default OpenStack does not allow live migration: you need to set 
live_migration_flag in nova.conf and configure libvirt to listen for remote 
connections, roughly as sketched below.
I use the same setup as you, but on Ubuntu 12.04 (the packages are the same), 
and live migration works perfectly.
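
A rough sketch of the relevant settings (paths match the Ubuntu packages; 
treat auth_tcp = "none" as an assumption for a trusted migration network, 
not a recommendation):

    # /etc/nova/nova.conf on every compute node (Icehouse reads this from [libvirt])
    [libvirt]
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

    # /etc/libvirt/libvirtd.conf
    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"        # or sasl/tls if that network needs authentication

    # /etc/default/libvirt-bin (Ubuntu) -- make libvirtd actually listen
    libvirtd_opts="-d -l"

Then restart libvirt-bin and nova-compute on both hosts.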

Good luck.

---------------


On 25/09/2014 17:34, Daniel Schneller wrote:
> Hi!
>
> We have an Icehouse system running with librbd based Cinder and Glance
> configurations, storing images and volumes in Ceph.
>
> Configuration is (apart from network setup details, of course) by the
> book / OpenStack setup guide.
>
> Works very nicely, including regular migration, but live migration of
> virtual machines fails. I created a simple machine booting from a volume
> based off the Ubuntu 14.04.1 cloud image for testing.
>
> Using Horizon, I can move this VM from host to host, but when I try to
> Live Migrate it from one baremetal host to another, I get an error
> message "Failed to live migrate instance to host 'node02'".
>
> The only related log entry I recognize is in the controller's 
> nova-api.log:
>
>
> 2014-09-25 17:15:47.679 3616 INFO nova.api.openstack.wsgi 
> [req-f3dc3c2e-d366-40c5-a1f1-31db71afd87a 
> f833f8e2d1104e66b9abe9923751dcf2 a908a95a87cc42cd87ff97da4733c414] 
> HTTP exception thrown: Compute service of 
> node02.baremetal.clusterb.centerdevice.local is unavailable at this time.
> 2014-09-25 17:15:47.680 3616 INFO nova.osapi_compute.wsgi.server 
> [req-f3dc3c2e-d366-40c5-a1f1-31db71afd87a 
> f833f8e2d1104e66b9abe9923751dcf2 a908a95a87cc42cd87ff97da4733c414] 
> 10.102.6.8 "POST 
> /v2/a908a95a87cc42cd87ff97da4733c414/servers/0f762f35-64ee-461f-baa4-30f5de4d5ddf/action 
> HTTP/1.1" status: 400 len: 333 time: 0.1479030
>
> I cannot see anything of value on the destination host itself.
>
> New machines get scheduled there, so the compute service cannot really
> be down.
>
> In this thread Travis
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/019944.html
> describes a similar situation, however that was on Folsom, so I wonder
> if it is still applicable.
>
> Would be great to get some outside opinion :)
>
> Thanks!
> Daniel
>
> -- 
> Daniel Schneller
> Mobile Development Lead
>
> CenterDevice GmbH                  | Merscheider Straße 1
>                                    | 42699 Solingen
> tel: +49 1754155711                | Deutschland
> daniel.schneller at centerdevice.com | www.centerdevice.com
>
>
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


