Adding OSDs without ceph-deploy

I use my own scripted method based on the documentation:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/

Just remember to run "ceph osd create" _without_ a UUID, then get the OSD number from the output. That call is what registers the OSD in the cluster's osdmap, which is exactly what the "osd.0 does not exist" error below is complaining about. Here's a quick and dirty version:

# allocate the next free OSD id; run this _without_ a UUID so you get a fresh number back
OSD=`ceph osd create`
[update ceph.conf if necessary]
mkdir -p /var/lib/ceph/osd/ceph-${OSD}
# pull the mkfs/mount options and this OSD's device out of ceph.conf
mkfs_opts=`ceph-conf -c /etc/ceph/ceph.conf -s osd --lookup osd_mkfs_options_xfs`
mount_opts=`ceph-conf -c /etc/ceph/ceph.conf -s osd --lookup osd_mount_options_xfs`
dev=`ceph-conf -c /etc/ceph/ceph.conf -s osd.${OSD} --lookup devs`
mkfs.xfs ${mkfs_opts} ${dev}
mount -t xfs -o ${mount_opts} ${dev} /var/lib/ceph/osd/ceph-${OSD}
# initialize the data directory and generate the OSD's cephx key
ceph-osd -c /etc/ceph/ceph.conf -i ${OSD} --mkfs --mkkey
ceph auth del osd.${OSD} # only if a prior OSD had this number
ceph auth add osd.${OSD} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${OSD}/keyring

Then set up your CRUSH map and start the OSDs.
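To make that concrete, here is roughly what the last step looks like (a sketch only: the 1.0 weight is a placeholder you should size to the disk, 'hostname -s' stands in for however you name your host buckets, and the service invocation assumes the sysvinit scripts with the OSD listed in ceph.conf, as in Alex's setup):

# if this host isn't in the CRUSH map yet, add its bucket first
ceph osd crush add-bucket `hostname -s` host
ceph osd crush move `hostname -s` root=default
# weight and place the OSD under this host (1.0 is a placeholder weight)
ceph osd crush add osd.${OSD} 1.0 host=`hostname -s`
# start the daemon, then confirm it registers and comes up/in
service ceph start osd.${OSD}
ceph osd tree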

On Jul 30, 2014, at 6:39 AM, Alex Bligh <alex@alex.org.uk> wrote:

> I use a scripted installation of ceph without ceph-deploy, which works fine on 0.63. On 0.80 it fails to add the OSDs. In this scenario the local OSDs are all listed in ceph.conf.
> 
> It runs:
>  mkcephfs --init-local-daemons osd -d blah
> 
> which creates the OSDs (as in they are there on the file system):
> 
> # ls /var/lib/ceph/osd/ceph-*
> /var/lib/ceph/osd/ceph-0:
> ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
> 
> /var/lib/ceph/osd/ceph-1:
> ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
> 
> /var/lib/ceph/osd/ceph-2:
> ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
> 
> /var/lib/ceph/osd/ceph-3:
> ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami
> 
> 
> However I get:
> 
> # service ceph start
> === mon.a ===
> Starting Ceph mon.a on extility-qa2-test...already running
> === osd.0 ===
> Error ENOENT: osd.0 does not exist.  create it before updating the crush map
> failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.09 host=extility-qa2-test root=default'
> 
> and ceph status returns no osds:
> 
> root@extility-qa2-test:~# ceph status
>    cluster 68efa90e-20a3-4efe-9382-38c8839aa6b0
>     health HEALTH_ERR 768 pgs stuck inactive; 768 pgs stuck unclean; no osds
>     monmap e1: 1 mons at {a=10.157.208.1:6789/0}, election epoch 2, quorum 0 a
>     osdmap e1: 0 osds: 0 up, 0 in
>      pgmap v2: 768 pgs, 3 pools, 0 bytes data, 0 objects
>            0 kB used, 0 kB / 0 kB avail
>                 768 creating
> 
> I'm fully aware there is a newer way to do this, but I'd like this route to work too if possible.
> 
> Is there some new magic I need to do to get ceph to recognise the osds? (again without ceph-deploy)
> 
> -- 
> Alex Bligh


