Looks like it's just following the warnings from libvirt
(https://bugzilla.redhat.com/show_bug.cgi?id=751631). Heh, but I found one
of the Inktank guys confirming last year that RBD was safe to add to the
whitelist:
http://www.redhat.com/archives/libvir-list/2012-July/msg00021.html
which is good for a bit more peace of mind!
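For what it's worth, the check in question is what you hit when migrating a
guest whose disk uses cache != none; from virsh it looks roughly like this
(domain and host names below are placeholders, and the exact error text
varies a bit between libvirt versions):
---
# Live migration attempt with a disk set to e.g. cache=writeback:
virsh migrate --live myvm qemu+ssh://otherhost/system
# error: Unsafe migration: Migration may lead to data corruption if
# disks use cache != none
# The check can be bypassed with --unsafe, though switching the disk to
# cache='none' (or the whitelist doing its job) is the cleaner answer:
virsh migrate --live --unsafe myvm qemu+ssh://otherhost/system
---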
-Michael
On 15/01/2014 05:41, Christian Balzer wrote:
Hello,
Firstly thanks to Greg and Sage for clearing this up.
Now all I need for a very early Xmas is for Ganeti 2.10 to be released and
for Debian's KVM to come with RBD enabled. ^o^
Meaning that for now I'm stuck with the kernel route in my setup.
On Wed, 15 Jan 2014 03:31:06 +0000 michael wrote:
It's referring to the standard Linux page cache
(http://www.moses.uklinux.net/patches/lki-4.html), which is not something
you need to set up.
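As a quick illustration (image name and device node are just examples): the
kernel RBD device behaves like any other block device, so ordinary buffered
I/O goes through the page cache automatically, and only O_DIRECT bypasses it:
---
rbd map rbd/myimage          # maps via the kernel driver, e.g. /dev/rbd0
dd if=/dev/rbd0 of=/dev/null bs=4M count=256               # buffered, cached
dd if=/dev/rbd0 of=/dev/null bs=4M count=256 iflag=direct  # bypasses the cache
---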
I use Ceph as OpenNebula storage, which is qemu-kvm based, and have had
no issues with live migrations.
If the storage is marked "shareable", live migrations are allowed
regardless of cache mode.
https://www.suse.com/documentation/sles11/singlehtml/book_kvm/book_kvm.html#idm139742235036576
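In libvirt terms that would be a disk stanza roughly like the one below, with
cache='none' and the <shareable/> flag set (pool, image, monitor and domain
names are only placeholders; add an <auth> element if cephx is enabled):
---
cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='mon1.example.net' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>
EOF
virsh attach-device myvm rbd-disk.xml --config
---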
That's an interesting read (and seemingly much more up to date than the
actual KVM docs), but not exactly unambiguous.
For KVM it states that "The only cache mode which supports live migration
on read/write shared storage is cache = none.", while for libvirt it
mentions what you said, the "marked shareable" bit.
Also, the latest stable Ganeti has this to say when firing up an RBD-backed
VM:
---
ganeti-noded pid=31781 WARNING KVM: overriding disk_cache setting
'default' with 'none' to prevent shared storage corruption on migration
---
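Setting the cache mode explicitly should at least silence that warning;
something along these lines ought to work (instance name is a placeholder,
and hypervisor parameter changes only take effect at the next restart):
---
gnt-instance modify -H disk_cache=none myinstance
# or cluster-wide:
gnt-cluster modify -H kvm:disk_cache=none
---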
libvirt with KVM on Debian stable (not the freshest, I know) also won't
let me use anything but "none" for KVM caching when requesting a live
migration with a dual-primary DRBD as the backing device.
AFAIK a full FS flush is issued just as the memory copy for the live
migration completes.
Yeah, that ought to do the trick nicely when dealing with RBD or the page
cache.
Christian
-Michael
On 15/01/2014 02:41, Christian Balzer wrote:
Hello,
In http://ceph.com/docs/next/rbd/rbd-config-ref/ it says:
"The kernel driver for Ceph block devices can use the Linux page cache
to improve performance."
Is there anywhere that provides more details about this?
As in, "can" implies that it might need to be enabled somewhere,
somehow. Are there any parameters that influence it like the ones on
that page mentioned above (which I assume are only for the user space
aka librbd)?
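For concreteness, the parameters I mean are the ceph.conf [client] options
from that page, along these lines (the values are just the documented
defaults):
---
[client]
    rbd cache = true
    rbd cache size = 33554432        # 32 MB, librbd only as far as I can tell
    rbd cache max dirty = 25165824   # 24 MB
---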
Also on that page we read:
"Since the cache is local to the client, there’s no coherency if there
are others accesing the image. Running GFS or OCFS on top of RBD will
not work with caching enabled."
This leads me to believe that enabling the RBD cache (or having it
enabled somehow in the kernel module) would be a big no-no when it
comes to KVM/qemu live migrations, seeing how the native KVM disk
caching also needs to be disabled for it to work.
Am I correct in that assumption?
Regards,
Christian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com