I use a scripted installation of ceph without ceph-deploy, which works fine on 0.63. On 0.80 it fails to add the OSDs. In this scenario the local OSDs are all listed in ceph.conf.

It runs:

  mkcephfs --init-local-daemons osd -d blah

which creates the OSDs (as in they are there on the file system):

# ls /var/lib/ceph/osd/ceph-*
/var/lib/ceph/osd/ceph-0:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

/var/lib/ceph/osd/ceph-1:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

/var/lib/ceph/osd/ceph-2:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

/var/lib/ceph/osd/ceph-3:
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

However I get:

# service ceph start
=== mon.a ===
Starting Ceph mon.a on extility-qa2-test...already running
=== osd.0 ===
Error ENOENT: osd.0 does not exist.  create it before updating the crush map
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.09 host=extility-qa2-test root=default'

and ceph status returns no osds:

root@extility-qa2-test:~# ceph status
    cluster 68efa90e-20a3-4efe-9382-38c8839aa6b0
     health HEALTH_ERR 768 pgs stuck inactive; 768 pgs stuck unclean; no osds
     monmap e1: 1 mons at {a=10.157.208.1:6789/0}, election epoch 2, quorum 0 a
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 768 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 768 creating

I'm fully aware there is a newer way to do this, but I'd like this route to work too if possible. Is there some new magic I need to do to get ceph to recognise the osds? (again without ceph-deploy)

--
Alex Bligh
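
For anyone hitting the same ENOENT: the message suggests the OSD data directories exist on disk but the ids were never registered in the monitors' osdmap, which the 0.63-era tooling presumably did for you. A rough, untested sketch of registering them by hand, assuming the default data directory layout and keyrings shown above (the capability strings follow the standard manual-deployment docs and may need adjusting):

  # Register each OSD id with the monitors and add its key.
  # Assumes a fresh osdmap, so "ceph osd create" hands out ids 0..3 in order,
  # and that the keyrings created under /var/lib/ceph/osd/ceph-$id exist.
  for id in 0 1 2 3; do
      ceph osd create
      ceph auth add osd.$id osd 'allow *' mon 'allow rwx' \
          -i /var/lib/ceph/osd/ceph-$id/keyring
  done
  service ceph start

Once the ids are present in the osdmap, the init script's "osd crush create-or-move" step should be able to place them under the host bucket.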