RBD cache questions (kernel vs. user space, KVM live migration)

Hello,

The documentation at http://ceph.com/docs/next/rbd/rbd-config-ref/ says:

"The kernel driver for Ceph block devices can use the Linux page cache to
improve performance."

Is there anywhere that provides more detail about this?
The word "can" implies that it might need to be enabled somewhere, somehow.
Are there any parameters that influence it, like the ones on the page
mentioned above (which I assume apply only to the user-space client,
i.e. librbd)?
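
For reference, here is how I understand those user-space settings would
look in ceph.conf, using the option names from that page (the values are
the documented defaults, just for illustration):

    [client]
        rbd cache = true
        # 32 MiB of cache per client (documented default)
        rbd cache size = 33554432
        # start writeback once this many dirty bytes accumulate
        rbd cache max dirty = 25165824
        # the cache tries to keep dirty data at or below this
        rbd cache target dirty = 16777216
        # flush dirty data older than this many seconds
        rbd cache max dirty age = 1.0

If I read the page correctly, none of these affect the kernel driver,
which is exactly what I am trying to confirm.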


Also on that page we read:
"Since the cache is local to the client, there’s no coherency if there are
others accesing the image. Running GFS or OCFS on top of RBD will not work
with caching enabled."

This leads me to believe that enabling the RBD cache (or having it enabled
somehow in the kernel module) would be a big no-no when it comes to
KVM/qemu live migration, given that native KVM disk caching also has to
be disabled for migration to work.

Am I correct in that assumption?
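
For context, by "native KVM disk caching disabled" I mean something like
this in the libvirt domain XML (just a sketch; the pool/image and monitor
names are made up):

    <disk type='network' device='disk'>
      <!-- cache='none' is what live migration is said to require -->
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='ceph-mon.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

What I am unsure about is whether "rbd cache = true" behind such a disk
(or page cache use by the kernel client) reintroduces the same problem.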

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/