Hi,

rbd_cache is a client-side option only, so there is no need to restart the OSDs. If you set cache=writeback in libvirt, it will enable the cache, so you don't need to set rbd_cache=true in ceph.conf (the libvirt setting should override it).

You can verify that it is enabled by running a sequential write benchmark with 4k blocks: you should see a lot more bandwidth with cache=writeback. (Sketches of the libvirt setting, the benchmark, and an admin-socket check follow at the end of this message, below the quoted mail.)

----- Original Message -----
From: "Межов Игорь Александрович" <megov@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, 5 May 2015 13:09:48
Subject: Turning on rbd cache safely

Hi!

After examining our running OSD configuration through an admin socket, we suddenly noticed that the "rbd_cache" parameter is set to "false". Until that moment I had assumed that the rbd cache is an entirely client-side feature, enabled with the "cache=writeback" parameter in the libvirt VM XML definition. Now I think I was wrong.

I plan to take these steps:

1. Add a group of rbd cache parameters to the master ceph.conf:

       [client]
       rbd cache = true
       rbd cache writethrough until flush = true
       rbd cache size = <256-1024 MB>
       rbd cache max dirty = <0.5-0.75 * rbd_cache_size>

2. Push the config to all nodes: ceph-deploy config push <nodeNN>

3. Set "noout" on the cluster, restart all OSDs one by one, then unset "noout".

4. Check the config of the running OSDs through the admin socket.

So, is that the right and safe way to turn on the rbd cache on a running production cluster? Do I need to specify the same rbd cache parameters on the client hosts running VMs with the librbd backend?

PS: We use Firefly 0.80.7 on Debian Wheezy x86_64, 5 nodes, 58 OSDs, journals on SSD.

Thanks for your answers!

Megov Igor
CIO Yuterra
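
For the libvirt side, here is a minimal sketch of a disk definition with cache='writeback', which is the attribute that turns the librbd cache on for that disk. The pool/image name, monitor host, and target device are placeholders, not values from this thread:

    <disk type='network' device='disk'>
      <!-- cache='writeback' enables the librbd cache for this disk -->
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- pool/image and monitor host are placeholders; adjust to your cluster -->
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <!-- add an <auth> element here if cephx authentication is enabled -->
    </disk>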
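
For the benchmark, a minimal fio sketch, assuming fio is installed inside the guest and /dev/vdb is a scratch RBD-backed disk you can overwrite (both are assumptions; adjust to your VM). Run it once with cache=none and once with cache=writeback and compare the reported bandwidth. direct=1 bypasses the guest page cache, so any difference comes from the librbd cache underneath qemu:

    # WARNING: writes directly to /dev/vdb and destroys its contents
    fio --name=seqwrite-4k --rw=write --bs=4k --size=1G \
        --ioengine=libaio --direct=1 --filename=/dev/vdb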
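
To check the effective value on the client rather than on an OSD, librbd clients can expose an admin socket too. A sketch, assuming you add an "admin socket" line to the [client] section on the hypervisor and restart the VM so qemu picks it up; the socket path and the client name/pid in the example are placeholders:

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

    # on the hypervisor, against whichever .asok file the qemu process created
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache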