Re: Ceph and KVM live migration

On 07/02/2012 11:21 AM, Gregory Farnum wrote:
On Sat, Jun 30, 2012 at 8:21 PM, Vladimir Bashkirtsev
<vladimir@xxxxxxxxxxxxxxx>  wrote:
On 01/07/12 11:59, Josh Durgin wrote:

On 06/30/2012 07:15 PM, Vladimir Bashkirtsev wrote:

On 01/07/12 10:47, Josh Durgin wrote:

On 06/30/2012 05:42 PM, Vladimir Bashkirtsev wrote:

Dear all,

Currently I am testing KVMs running on Ceph, and in particular the
recent cache feature. Performance is of course vastly improved, but I
still get occasional KVM hold-ups - not sure whether Ceph or KVM is to
blame. I will deal with that later. Right now I have a question I could
not answer myself: if I do a live migration of a KVM while there is
some uncommitted data in the Ceph cache, will that cache be committed
prior to cut-over to the other host? Reading through the list I got the
impression that it may be left uncommitted and thus cause data
corruption. I would just like a simple confirmation that code which
commits the cache on cut-over to the new host does exist, and that no
data corruption due to RBD cache + live migration should happen.

Regards,
Vladimir


QEMU does a flush on all the disks when it stops the guest on the
original host, so there will be no uncommitted data in the cache.

Josh
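
For context, at the time RBD cache settings were passed inline in the
qemu drive specification. A sketch of what such a drive line might look
like - the pool/image name is taken from later in this thread, and the
remaining flags are illustrative, not from the original messages:

    qemu-kvm -m 2048 \
        -drive file=rbd:rbd/mail.logics.net.au:rbd_cache=true,format=raw,if=virtio

rbd_cache=true enables librbd's writeback cache; the flush described
above is what keeps that cache consistent across a migration cut-over.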

Thank you for the quick and precise answer. But now, when I actually
attempted to live migrate a Ceph-based VM, I got:

Unable to migrate guest: Invalid relative path
'rbd/mail.logics.net.au:rbd_cache=true': Invalid argument

I guess KVM does not like having :rbd_cache=true appended (migration
works without it). I know it is most likely a KVM problem, but I
decided to ask here anyway in case you know about it. Any ideas how to
fix it?

Regards,
Vladimir
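
The error suggests libvirt is treating the whole rbd source string,
cache option included, as a file path. A guess at the kind of disk
definition involved (the attributes are illustrative; only the source
name is taken from the error message):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/mail.logics.net.au:rbd_cache=true'/>
      <target dev='vda' bus='virtio'/>
    </disk>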


Is the destination librbd older and not supporting the cache option?

Migrating with rbd_cache=true and other options specified like that
worked in my testing.

Josh

Both installations are the same:
qemu 1.0.17
ceph 0.47.3
libvirt 0.9.12

I googled around and found that if I invoke the migration with the
--unsafe option it should go through. And indeed it does. Apparently
this check was introduced in libvirt 0.9.12. A quick downgrade to
libvirt 0.9.11 and there were no problems migrating.
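
For anyone hitting the same error, the workaround described above would
look something like this (the domain name and destination host here are
hypothetical):

    virsh migrate --live --unsafe mail.logics.net.au qemu+ssh://desthost/system

As Josh explains below, --unsafe only skips libvirt's own safety check;
it does not change qemu's behavior.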

Have we checked whether live migration actually does the cache flushes
when you use the unsafe flag? That worries me a little!

The unsafe flag is purely a libvirt mechanism for bypassing libvirt's
format whitelist. It does not affect qemu at all.

In either case, I created a bug so we can try and make QEMU play nice:
http://tracker.newdream.net/issues/2685

The issue is with libvirt, not qemu. I sent a patch fixing it to the libvirt list:

http://www.redhat.com/archives/libvir-list/2012-July/msg00021.html

Josh

