Hello,

On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote:

> Hi,
>
> I'm reading the documentation about creating new OSDs and I see:
>
> "The foregoing example assumes a disk dedicated to one Ceph OSD Daemon,
> and a path to an SSD journal partition. We recommend storing the journal
> on a separate drive to maximize throughput. You may dedicate a single
> drive for the journal too (which may be expensive) or place the journal
> on the same disk as the OSD (not recommended as it impairs performance).
> In the foregoing example we store the journal on a partitioned solid
> state drive."
>
> From: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
>
> So I would like to create my journals on the same SSD as I have my OS
> (RAID1). Is it good practice to initiate a new disk with:
>

Which SSDs (model)?
Some SSDs are patently unsuited for OSD journals, while others will have
no issues keeping up with the OS and journal duties.

The sda below suggests that your RAID1 is a HW one?
That's a bad choice on two counts: a HW RAID can't be TRIM'ed, last I
checked, and you would get a lot more performance out of a software RAID1
with journals on both SSDs.
A RAID1 still might be OK if you can/want to trade performance for
redundancy.

> ceph-deploy disk zap osdserver1:sdb
> ceph-deploy osd prepare osdserver1:sdb:/dev/sda
>

I'm not a ceph-deploy expert or fan, but I'm pretty sure you will need to
create the partitions beforehand and then assign them accordingly.
And using UUIDs makes things renumbering-proof:

ceph-deploy osd prepare ceph-04:sdb:/dev/disk/by-id/wwn-0x55cd2e404b73d348-part4

Christian

> parted info on sda is:
>
> Number  Start   End     Size    File system     Name  Flags
>  1      1049kB  211MB   210MB   ext4                  boot
>  2      211MB   21.2GB  21.0GB  ext4
>  3      21.2GB  29.6GB  8389MB  linux-swap(v1)
>
> There is enough space for many 5G journal partitions on sda.

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
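
P.S.: On the "which SSDs (model)" point, a quick way to gauge journal
suitability is to measure synchronous 4k write performance, which is
essentially the I/O pattern a Ceph journal generates. A minimal sketch
with fio, assuming it is installed and that /mnt/ssd is a filesystem on
the SSD in question (the path and test size are made up for
illustration):

  # 4k sequential writes, each one sync'ed to the device; an SSD fit
  # for journal duty sustains this at a high IOPS rate, an unsuited
  # one collapses to a few hundred IOPS or worse
  fio --name=journal-test --filename=/mnt/ssd/journal-test \
      --size=1G --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based

  # remove the 1G test file afterwards
  rm /mnt/ssd/journal-test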
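
P.P.S.: As for creating the partitions beforehand, a minimal sketch,
assuming sda is GPT-partitioned (the Name column in your parted output
suggests it is), that partition number 4 is free, and that sgdisk from
the gdisk package is available; the by-id path at the end is a
placeholder, substitute whatever ls shows for the new partition:

  # carve a 5G journal partition out of the free space on sda
  sgdisk --new=4:0:+5G --change-name=4:"ceph journal" /dev/sda
  partprobe /dev/sda

  # find the stable by-id path of the new partition
  ls -l /dev/disk/by-id/ | grep part4

  # zap the data disk and prepare the OSD against that path
  ceph-deploy disk zap osdserver1:sdb
  ceph-deploy osd prepare osdserver1:sdb:/dev/disk/by-id/<your-ssd-id>-part4

Repeat with partition numbers 5, 6, ... for further journals.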