It's referring to the standard Linux page cache (see
http://www.moses.uklinux.net/patches/lki-4.html), which is not something
you need to set up - buffered I/O against the mapped block device goes
through it automatically.
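For example, assuming an image already mapped with "rbd map" and showing
up as /dev/rbd0 (device name is just an assumption here), a plain buffered
read is cached while O_DIRECT bypasses the cache:

  # buffered read; data lands in the page cache
  dd if=/dev/rbd0 of=/dev/null bs=4M count=256
  # the same read again is largely served from RAM
  dd if=/dev/rbd0 of=/dev/null bs=4M count=256
  # direct I/O skips the page cache entirely
  dd if=/dev/rbd0 of=/dev/null bs=4M count=256 iflag=direct

Nothing RBD-specific to tune there; it behaves like any other block device.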
I use Ceph as the storage backend for an OpenNebula setup, which is
QEMU/KVM based, and have had no issues with live migrations.
If the disk is marked "shareable" in the domain definition, libvirt allows
the live migration regardless of the cache mode.
https://www.suse.com/documentation/sles11/singlehtml/book_kvm/book_kvm.html#idm139742235036576
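As a rough sketch (pool/image name and monitor host are placeholders, and
the cephx auth bits are omitted), the disk section of the libvirt domain
XML would look something like:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <source protocol='rbd' name='rbd/vm-disk-1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <shareable/>
  </disk>

With <shareable/> present, libvirt does not refuse the migration even
though the cache mode is not 'none'.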
AFAIK QEMU issues a full flush of the guest's block devices just as it
finishes copying the memory across for the live migration.
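The migration itself is then just the usual virsh call (domain name and
destination host are made up here):

  virsh migrate --live --verbose vm01 qemu+ssh://node2.example.com/system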
-Michael
On 15/01/2014 02:41, Christian Balzer wrote:
Hello,
In http://ceph.com/docs/next/rbd/rbd-config-ref/ it is said that:
"The kernel driver for Ceph block devices can use the Linux page cache to
improve performance."
Is there anywhere that provides more details about this?
As in, "can" implies that it might need to be enabled somewhere, somehow.
Are there any parameters that influence it, like the ones mentioned on that
page above (which I assume apply only to the user space side, aka librbd)?
Also on that page we read:
"Since the cache is local to the client, there’s no coherency if there are
others accessing the image. Running GFS or OCFS on top of RBD will not work
with caching enabled."
This leads me to believe that enabling the RBD cache (or having it enabled
somehow in the kernel module) would be a big no-no when it comes to
KVM/qemu live migrations, seeing how the native KVM disk caching also
needs to be disabled for it to work.
Am I correct in that assumption?
Regards,
Christian