We want to use this script as a service for start/stop (but it hasn't been tested yet):

#!/bin/bash
# chkconfig: - 50 90
# description: make a journal for osd.0 in RAM
start () {
    # create the journal in /dev/shm only if it isn't there already
    [ -f /dev/shm/osd.0.journal ] || ceph-osd -i 0 --mkjournal
}
stop () {
    # stop the OSD, flush the journal back to the store, then drop it
    service ceph stop osd.0 && ceph-osd -i 0 --flush-journal && rm -f /dev/shm/osd.0.journal
}
case "$1" in
    start) start;;
    stop) stop;;
esac

Also, we didn't see any noticeable improvement with rbd-caching, but we haven't run any tests to measure it; that's just our impression.

On Thu, Feb 5, 2015 at 12:09 AM, Daniel Schwager <Daniel.Schwager@xxxxxxxx> wrote:
> Hi Cristian,
>
>> We will try to report back, but I'm not sure our use case is relevant.
>> We are trying to use every dirty trick to speed up the VMs.
>
> we have the same use-case.
>
>> The second pool is for the test machines and has the journal in RAM,
>> so this part is very volatile. We don't really care, because if the
>> worst happens and we have a power loss we just redo the pool and start
>> new instances. Journal in RAM did wonders for us in terms of
>> read/write speed.
>
> How do you handle a reboot of a node managing your pool having the journals in RAM?
> All the mons know about the volatile pool - do you have to remove & recreate the
> pool automatically after rebooting this node?
>
> Did you try to enable rbd-caching? Is there a write-performance benefit to using
> the journal in RAM instead of enabling rbd-caching on the client (openstack) side?
> I thought with rbd-caching the write performance should be fast enough.
>
> regards
> Danny
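
PS: for reference, this is roughly how we'd wire the script above into chkconfig once we get around to testing it. Untested sketch; the name osd0-journal is just a placeholder, and it assumes a sysvinit/chkconfig distro (e.g. CentOS 6):

    # untested sketch - adjust the name/path to taste
    install -m 0755 osd0-journal /etc/init.d/osd0-journal
    chkconfig --add osd0-journal      # picks up the "50 90" start/stop priorities from the header
    chkconfig osd0-journal on
    service osd0-journal start        # creates /dev/shm/osd.0.journal if it's missing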
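And on Danny's rbd-caching question: if anyone wants to compare journal-in-RAM against client-side caching, something like the following in ceph.conf on the hypervisor should enable it. This is only a sketch based on the documented options (the sizes shown are the upstream defaults), not a config we've benchmarked:

    [client]
    rbd cache = true
    # the sizes below are the upstream defaults, listed so they're easy to tune
    rbd cache size = 33554432
    rbd cache max dirty = 25165824
    rbd cache target dirty = 16777216
    # behaves as writethrough until the guest sends its first flush,
    # so guests that never flush stay safe
    rbd cache writethrough until flush = true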