Hi Dane,

If you deployed with ceph-deploy, you will see that the journal is just a
symlink. Take a look at /var/lib/ceph/osd/<osd-id>/journal. The link should
point to the first partition of your hard drive, so there is no filesystem
for the journal, just a raw block device.

Roughly, you should try:

create N partitions on your SSD for your N OSDs
ceph osd set noout
sudo service ceph stop osd.$ID
ceph-osd -i $ID --flush-journal
rm -f /var/lib/ceph/osd/<osd-id>/journal
ln -s /dev/<ssd-partition-for-your-journal> /var/lib/ceph/osd/<osd-id>/journal
ceph-osd -i $ID --mkjournal
sudo service ceph start osd.$ID
ceph osd unset noout

This should work.

Cheers.

On 11 Aug 2014, at 18:36, Dane Elwell <dane.elwell at gmail.com> wrote:

> Hi list,
>
> Our current setup has OSDs with their journals sharing the same disk as
> the data, and we've reached the point where we're outgrowing this setup.
> We're currently vacating disks in order to replace them with SSDs and
> recreate the OSD journals on the SSDs in a 5:1 ratio of spinners to SSDs.
>
> I've read in a few places that it's possible to move the OSD journals
> without losing data on the OSDs, which is great; however, none of the
> material I've read seems to cover our case.
>
> We installed Ceph using ceph-deploy, putting the journals on the same
> disks. ceph-deploy doesn't populate a ceph.conf file fully, so we don't
> have e.g. individual OSD entries in there.
>
> If I'm understanding this correctly, the Ceph disks are automounted by
> udev rules from /lib/udev/rules.d/95-ceph-osd.rules, and this mounts the
> OSD disk (partition 1) and then mounts the journal under /journal
> (partition 2 of the same disk).
>
> That's all well and good, but as I now want to move the journal, how do
> I go about telling Ceph where the new journals are located so they can
> be mounted in the right location? Do I need to populate ceph.conf with
> individual entries for all OSDs, or is there a way I can make udev do
> all the heavy lifting?
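For reference, the per-OSD steps above can be collected into a small script.
This is only a sketch: the /var/lib/ceph/osd/ceph-$ID mount path and the
OSD-to-partition mapping (osd.12 -> /dev/sdf1) are assumptions you must adapt
to your cluster. It defaults to a dry run that prints the commands instead of
executing them.

```shell
#!/bin/bash
# Sketch of the journal-move procedure above. The ceph-$ID mount path and
# the example OSD id / partition are assumptions -- adapt to your cluster.
# DRYRUN defaults to "echo" so the script only prints each command;
# run with DRYRUN= (empty) to actually execute them.
set -u
DRYRUN="${DRYRUN-echo}"

move_journal() {
    local id="$1"    # numeric OSD id, e.g. 12
    local part="$2"  # SSD journal partition, e.g. /dev/sdf1

    $DRYRUN sudo service ceph stop "osd.$id"
    $DRYRUN ceph-osd -i "$id" --flush-journal
    $DRYRUN rm -f "/var/lib/ceph/osd/ceph-$id/journal"
    $DRYRUN ln -s "$part" "/var/lib/ceph/osd/ceph-$id/journal"
    $DRYRUN ceph-osd -i "$id" --mkjournal
    $DRYRUN sudo service ceph start "osd.$id"
}

$DRYRUN ceph osd set noout
move_journal 12 /dev/sdf1   # repeat per OSD (up to 5 per SSD in your setup)
$DRYRUN ceph osd unset noout
```

Doing one OSD at a time between the noout set/unset keeps the window where
data is under-replicated as short as possible.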
> Regards
>
> Dane
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Cheers.

Sébastien Han
Cloud Architect

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72
Mail: sebastien.han at enovance.com
Address: 11 bis, rue Roquépine - 75008 Paris
Web: www.enovance.com - Twitter: @enovance