On 03/12/2013 01:48 PM, Travis Rhoden wrote:
Hi Josh,
Thanks for the info. So if I want to do live migration with VMs that were
launched with boot-from-volume, I'll need to use virsh to do the migration,
rather than Nova. Okay, that should be doable. As an aside, I will
probably want to look at the OpenStack DB and figure out how to tell it
that the VM has moved to a different host. I'd rather there not be a
disconnect between Nova and libvirt about where the VM lives. =)
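Something like this is what I'm picturing (untested, and the table and
column names are just my reading of the Folsom-era nova schema -- the
uuid and hostname are placeholders, and I'd back up the DB first):

    mysql -u nova -p nova <<'SQL'
    -- point the migrated instance at its new hypervisor
    UPDATE instances
       SET host = 'vmhost2'            -- destination host (placeholder)
     WHERE uuid = '<instance-uuid>'    -- the VM that was moved via virsh
       AND deleted = 0;
    SQL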
It's probably not too hard to edit nova to skip the checks when the
instance is volume-backed, but if you don't want to do that, libvirt
should be fine, and a bit more flexible.
Additionally, thanks for saying that the migration is safe with the RBD
cache enabled. I was going to ask that as well.
No problem, that question usually comes up when talking about live
migration :).
Josh
On Tue, Mar 12, 2013 at 4:38 PM, Josh Durgin <josh.durgin@xxxxxxxxxxx> wrote:
On 03/12/2013 01:28 PM, Travis Rhoden wrote:
Thanks for the response, Trevor.
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate.
I would argue that it is on shared storage. It is an RBD stored in
Ceph,
and that's available at each host via librbd.
Agreed.
You should be able to run block migration (which is a different form of
the
live-migration) that does not require shared storage.
I think block-migration would not be correct in this instance. There
is no
file to copy (there is no disk file in /var/lib/nova/instances/<domain>).
Where is it going to copy it from/to? It's already an RBD.
I know this is supposed to work [1]. I'm just wondering if it requires
disabling the "true" live migration in libvirt. I think Josh will know.
Yes, it works with true live migration just fine (even with caching). You
can use "virsh migrate" or even do it through the virt-manager gui.
Nova is just doing a shared-storage check there that doesn't make sense
for volume-backed instances with live migration.
Unfortunately I haven't had the time to look at that problem in
nova since that message, but I suspect the same issue is still
there.
Josh
[1] https://lists.launchpad.net/openstack/msg15074.html
On Tue, Mar 12, 2013 at 4:13 PM, tra26 <tra26@xxxxxxxxxxxxx> wrote:
Travis,
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate. You should be able to run block migration (which is a
different form of the live-migration) that does not require shared
storage.
Take a look at:
http://www.sebastien-han.fr/blog/2012/07/12/openstack-block-migration/
for information regarding the block level migration.
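The client call is roughly the following (Folsom-era syntax, so
double-check "nova help live-migration" on your install; the uuid and
host are placeholders):

    nova live-migration --block-migrate <instance-uuid> <destination-host>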
-Trevor
On 2013-03-12 15:57, Travis Rhoden wrote:
Hey folks,
I'm wondering if the following is possible. I have OpenStack (Folsom)
configured to boot VMs from volume using Ceph as a backend for Cinder
and Glance. My setup pretty much follows the Ceph guides for this
verbatim. I've been using this setup for a while now, and it's all
been really smooth.
However, if I try to do a live-migration, I get this:
RemoteError: Remote error: RemoteError Remote error:
InvalidSharedStorage_Remote vmhost3 is not on shared storage: Live
migration can not be used without shared storage.
One thing I am doing that may not be normal is that I am trying to do
the "true" live migration in KVM/libvirt, having set this in my
nova.conf:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
Anyone know if this setup should work? Or if there is something I
should tweak to make it work? I was thinking that having the RBD
available via librbd at both the source and destination host makes
that storage shared storage. Perhaps not if I am trying to do live
migration? If I do OpenStack's normal "live" migration, it will pause
the VM and move it, which is less than ideal, but workable.
Thanks,
- Travis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com