On 07/18/2013 11:32 AM, Maciej Gałkiewicz wrote:
On 18 Jul 2013 20:25, "Josh Durgin" <josh.durgin@xxxxxxxxxxx> wrote:
> Setting rbd_cache=true in ceph.conf will make librbd turn on the cache
> regardless of qemu. Setting qemu to cache=none tells qemu that it
> doesn't need to send flush requests to the underlying storage, so it
> does not do so. This means librbd is caching data, but qemu isn't
> telling it to persist that data when the guest requests it. This is
> the same as qemu's cache=unsafe mode, which makes it easy to get a
> corrupt fs if the guest isn't shut down cleanly.
>
> There's a ceph option to make this safer -
> rbd_cache_writethrough_until_flush. If this and rbd_cache are true,
> librbd will operate with the cache in writethrough mode until it is
> sure that the guest using it is capable of sending flushes (i.e. qemu
> has cache=writeback). Perhaps we should enable this by default so
> people are less likely to accidentally use an unsafe configuration.

Ok. Now it makes sense. So the last question is: how do I make sure
that qemu actually operates with cache=writeback with rbd?
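For reference, the librbd-side settings being discussed would look
roughly like this in ceph.conf (a sketch only; the option names are the
ones mentioned above, and putting them in the [client] section is an
assumption about where the qemu host reads its client options):

    [client]
        # enable the librbd cache
        rbd cache = true
        # stay in writethrough mode until the guest sends its first
        # flush, so an unsafe qemu cache mode can't silently lose data
        rbd cache writethrough until flush = true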
If the setting is in the qemu command line, it'll send flushes, and you
can verify that librbd is seeing them by doing a 'perf dump' on the
admin socket and looking at the aio_flush count there.

This makes me notice that the synchronous flush perf counter went
missing, so it'll always read 0 [1].

Josh

[1] http://tracker.ceph.com/issues/5668
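To illustrate both steps, a sketch of what this check might look like
(the image spec rbd:rbd/myimage, the client id admin, and the admin
socket path are placeholders, not taken from this thread):

    # qemu drive option with cache=writeback, so qemu forwards guest
    # flushes to librbd
    -drive file=rbd:rbd/myimage:id=admin,format=raw,if=virtio,cache=writeback

    # with something like 'admin socket = /var/run/ceph/$name.$pid.asok'
    # in the [client] section, dump the client's perf counters and look
    # for a non-zero aio_flush count while the guest is doing I/O
    ceph --admin-daemon /var/run/ceph/client.admin.12345.asok perf dump | grep aio_flush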