+1 for Wido.

Moreover, if you want to store the journal on a block device, you should
partition your journal disk and assign one partition per OSD, e.g.
/dev/sdb1, /dev/sdb2, /dev/sdb3, and so on.

Again, osd journal = /dev/osd$id/journal is wrong: if you use this
directive with a path like that, it must point to a location on a mounted
filesystem, because the journal will be a file. Anyway, as far as I'm
concerned, I didn't notice that much of a performance gain from putting
the journal on a raw block device. In the end, just put the journal on a
dedicated, formatted partition, since the filesystem overhead is not that
big. So keep the osd journal directive, but change its value to something
like osd journal = /srv/ceph/journals/osd$id/journal. (A rough sketch of
such a setup follows the quoted thread below.)

Cheers.
--
Regards,
Sébastien Han

On Thu, Feb 14, 2013 at 2:52 PM, Joao Eduardo Luis
<joao.luis@xxxxxxxxxxx> wrote:
> Including ceph-users, as it feels like this belongs there :-)
>
>
> On 02/14/2013 01:47 PM, Wido den Hollander wrote:
>>
>> On 02/14/2013 11:24 AM, charles L wrote:
>>>
>>> Please, can someone help me with the ceph.conf for 0.56.2? I have two
>>> storage servers, each with 3 TB hard drives and two SSDs. I want to
>>> put the OSD data on a hard drive and the OSD journal on an SSD.
>>>
>>> I want to know how the osd journal configuration is pointed at the
>>> SSD. My SSD is /dev/sdb.
>>>
>>> I have tried the osd data configuration devs = /dev/sda and it worked
>>> just fine.
>>>
>>> Is this line correct: "osd journal = /dev/osd$id/journal"? And what
>>> about "osd journal = /dev/sdb"?
>>>
>>
>> In the OSD-specific sections (osd.0 and osd.1) you override the journal
>> setting made in the [osd] section.
>>
>> Those overrides are not needed, since you give the whole block device
>> (/dev/sdb) to the OSD as a journal.
>>
>> Are you sure /dev/sda is available for the OSD and isn't your boot
>> device?
>>
>> Wido
>>
>>> [global]
>>>
>>> auth cluster required = cephx
>>> auth service required = cephx
>>> auth client required = cephx
>>> debug ms = 1
>>>
>>> [osd]
>>> osd journal size = 1000
>>> osd journal = /dev/osd$id/journal
>>> filestore xattr use omap = true
>>> osd mkfs type = xfs
>>> osd mkfs options xfs = -f
>>> osd mount options xfs = rw,noatime
>>>
>>> [osd.0]
>>> host = server04
>>> devs = /dev/sda
>>> osd journal = /dev/sdb
>>>
>>> [osd.1]
>>> host = server05
>>> devs = /dev/sda
>>> osd journal = /dev/sdb
>>>
>>> Thanks.
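
Something along these lines would do it. This is a minimal sketch only:
the 50/50 partition split, the /srv/ceph/journals mount points, and the
second OSD are assumptions for illustration, not taken from Charles's
setup; adapt them to your own layout.

    # One SSD partition per OSD on this host, formatted and mounted
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart osd0 xfs 0% 50%
    parted -s /dev/sdb mkpart osd1 xfs 50% 100%
    mkfs.xfs /dev/sdb1
    mkfs.xfs /dev/sdb2
    mkdir -p /srv/ceph/journals/osd0 /srv/ceph/journals/osd1
    mount -o noatime /dev/sdb1 /srv/ceph/journals/osd0
    mount -o noatime /dev/sdb2 /srv/ceph/journals/osd1

    # ceph.conf: keep the journal directive, but point it at the mounted
    # filesystem instead of a raw /dev path
    [osd]
    osd journal = /srv/ceph/journals/osd$id/journal
    osd journal size = 1000

With something like that in place the journal is just a file on the
SSD-backed filesystem, so no per-OSD journal override is needed in the
[osd.0] and [osd.1] sections.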