I too use a scripted version of the above documentation:
https://github.com/ashishchandra1/ceph_install/blob/master/ceph_install.sh

It works just fine, with a couple of tweaks (if required), but hey, it does
the job. I use Trusty though, so I have Ceph Firefly running with two OSDs.

On Wed, Jul 30, 2014 at 11:37 PM, John Nielsen <lists at jnielsen.net> wrote:

> I use my own scripted method based on the documentation:
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
>
> Just remember to run "ceph osd create" _without_ a UUID, then get the OSD
> number from the output. Here's a quick and dirty version:
>
> OSD=`ceph osd create`
> [update ceph.conf if necessary]
> mkdir -p /var/lib/ceph/osd/ceph-${OSD}
> mkfs_opts=`ceph-conf -c /etc/ceph/ceph.conf -s osd --lookup osd_mkfs_options_xfs`
> mount_opts=`ceph-conf -c /etc/ceph/ceph.conf -s osd --lookup osd_mount_options_xfs`
> dev=`ceph-conf -c /etc/ceph/ceph.conf -s osd.${OSD} --lookup devs`
> mkfs.xfs ${mkfs_opts} ${dev}
> mount -t xfs -o ${mount_opts} ${dev} /var/lib/ceph/osd/ceph-${OSD}
> ceph-osd -c /etc/ceph/ceph.conf -i ${OSD} --mkfs --mkkey
> ceph auth del osd.${OSD} # only if a prior OSD had this number
> ceph auth add osd.${OSD} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${OSD}/keyring
>
> Then set up your CRUSH map and start the OSDs.
>
> On Jul 30, 2014, at 6:39 AM, Alex Bligh <alex at alex.org.uk> wrote:
>
> > I use a scripted installation of ceph without ceph-deploy, which works
> > fine on 0.63. On 0.80 it fails to add the OSDs. In this scenario the
> > local OSDs are all listed in ceph.conf.
> >
> > It runs:
> >
> >   mkcephfs --init-local-daemons osd -d blah
> >
> > which creates the OSDs (as in they are there on the file system):
> >
> > # ls /var/lib/ceph/osd/ceph-*
> > /var/lib/ceph/osd/ceph-0:
> > ceph_fsid current fsid journal keyring magic ready store_version superblock whoami
> >
> > /var/lib/ceph/osd/ceph-1:
> > ceph_fsid current fsid journal keyring magic ready store_version superblock whoami
> >
> > /var/lib/ceph/osd/ceph-2:
> > ceph_fsid current fsid journal keyring magic ready store_version superblock whoami
> >
> > /var/lib/ceph/osd/ceph-3:
> > ceph_fsid current fsid journal keyring magic ready store_version superblock whoami
> >
> > However I get:
> >
> > # service ceph start
> > === mon.a ===
> > Starting Ceph mon.a on extility-qa2-test...already running
> > === osd.0 ===
> > Error ENOENT: osd.0 does not exist. create it before updating the crush map
> > failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.09 host=extility-qa2-test root=default'
> >
> > and ceph status returns no osds:
> >
> > root at extility-qa2-test:~# ceph status
> >     cluster 68efa90e-20a3-4efe-9382-38c8839aa6b0
> >      health HEALTH_ERR 768 pgs stuck inactive; 768 pgs stuck unclean; no osds
> >      monmap e1: 1 mons at {a=10.157.208.1:6789/0}, election epoch 2, quorum 0 a
> >      osdmap e1: 0 osds: 0 up, 0 in
> >       pgmap v2: 768 pgs, 3 pools, 0 bytes data, 0 objects
> >             0 kB used, 0 kB / 0 kB avail
> >                  768 creating
> >
> > I'm fully aware there is a newer way to do this, but I'd like this route
> > to work too if possible.
> >
> > Is there some new magic I need to do to get ceph to recognise the osds?
> > (again without ceph-deploy)
> >
> > --
> > Alex Bligh
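
For the problem Alex describes, here is a minimal, untested sketch of
registering the mkcephfs-created OSDs with the monitor so that the init
script's "osd crush create-or-move" can succeed. It assumes the data
directories under /var/lib/ceph/osd/ are intact, that the osdmap is still
empty (so the ids are handed back as 0-3 in order), and that the keyrings
written by mkcephfs are not yet known to the monitor:

    for id in 0 1 2 3; do
        # read the OSD's own uuid from its data directory and register that
        # id in the osdmap ("ceph osd create" hands out the lowest free id)
        uuid=`cat /var/lib/ceph/osd/ceph-${id}/fsid`
        ceph osd create ${uuid}
        # register the key mkcephfs wrote, if the monitor does not have it yet
        ceph auth add osd.${id} osd 'allow *' mon 'allow rwx' \
            -i /var/lib/ceph/osd/ceph-${id}/keyring
    done
    service ceph start   # the init script then runs osd crush create-or-move itself

After that, "ceph status" should show the four OSDs in the osdmap instead of
"0 osds: 0 up, 0 in".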

Thanks and Regards
Ashish Chandra
Openstack Developer, Cloud Engineering
Reliance Jio