On Tue, 14 Jan 2014, Gregory Farnum wrote:
> On Tuesday, January 14, 2014, Christian Balzer wrote:
>     Also on that page we read:
>     "Since the cache is local to the client, there's no coherency if there
>     are others accessing the image. Running GFS or OCFS on top of RBD will
>     not work with caching enabled."
>
>     This leads me to believe that enabling the RBD cache (or having it
>     enabled somehow in the kernel module) would be a big no-no when it
>     comes to KVM/qemu live migrations, seeing how the native KVM disk
>     caching also needs to be disabled for it to work.
>
> I'm not sure about this, as I'm not sure what KVM requires here. The kernel
> client is safe for the named systems, period, though -- it looks and
> behaves just like a normal block device. The problem with the userspace
> cache is that it makes the local node behave the same way as a disk's
> RAM cache would -- so if you tell something to be durable, it will be, but
> you don't get coherency between different mounts because the cache is all
> local.

This is safe.  My understanding is that qemu issues a flush (or is detached,
or something) during the migration process, so the cache is written back to
rados before things flip over.

(As an aside, a flush by the guest kernel also triggers an rbd flush, so rbd
with caching is fully crash safe too, just like the cache found in a
SATA/SAS disk.)

sage

> -Greg
>
>     Am I correct in that assumption?
>
>     Regards,
>
>     Christian
>     --
>     Christian Balzer        Network/Systems Engineer
>     chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
>     http://www.gol.com/
>     _______________________________________________
>     ceph-users mailing list
>     ceph-users@xxxxxxxxxxxxxx
>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
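To make the flush path described above concrete, here is a minimal sketch
(not from the thread) of how an explicit flush writes the local librbd cache
back to RADOS, using the librbd/librados C API; the same path is exercised
by a guest-issued flush or by qemu before a live migration completes. The
pool name, image name, and ceph.conf path below are hypothetical
placeholders.

    /* Minimal sketch: write back the client-side librbd cache with an
     * explicit flush. Link with -lrados -lrbd. */
    #include <stdio.h>
    #include <rados/librados.h>
    #include <rbd/librbd.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t ioctx;
        rbd_image_t image;

        /* Connect to the cluster using the local ceph.conf (path assumed). */
        if (rados_create(&cluster, NULL) < 0)
            return 1;
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0)
            return 1;

        /* Open a hypothetical image; whether librbd caches at all is
         * controlled by the "rbd cache" option in ceph.conf, not by this
         * code. */
        rados_ioctx_create(cluster, "rbd", &ioctx);
        rbd_open(ioctx, "vm-disk", &image, NULL);

        /* ... writes land in the local librbd cache when caching is
         * enabled ... */

        /* An explicit flush pushes any dirty cache contents back to RADOS,
         * which is why the cache is crash safe even though it is local to
         * the client. */
        rbd_flush(image);

        rbd_close(image);
        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 0;
    }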